Re: HBase HDFS disk space usage
We created our own MetricContext for reading these metrics. Basically your
metric context gets called every X seconds based on your
hadoop-metrics.properties, so you can add whatever else you want in there.
We were also concerned with HDFS usage, and while we couldn't get the
context to pull that in specifically, we did use the java File API to get
the current used disk space for our various mounted drives. This has worked
reasonably well, though it does not mirror HDFS usage exactly, and it is
per-server rather than per-table.
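As a rough sketch of that java File approach (the class name and mount
points here are just illustrative, not our actual code):

```java
import java.io.File;

public class DiskSpaceProbe {

    // Used bytes on the filesystem containing the given mount point,
    // computed as total minus free via the java.io.File API.
    public static long usedBytes(String mountPoint) {
        File f = new File(mountPoint);
        return f.getTotalSpace() - f.getFreeSpace();
    }

    public static void main(String[] args) {
        // Check whatever drives you have mounted; "/" is just an example.
        String mount = args.length > 0 ? args[0] : "/";
        System.out.println(mount + " used_bytes=" + usedBytes(mount));
    }
}
```

Note this measures the whole filesystem, so anything else on that drive
(logs, OS files) is counted too, which is part of why it only approximates
HDFS usage.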
You can take a look at the GangliaContext for an example, in fact our
MetricContext extends GangliaContext so we can still report to ganglia but
also report to our own status system as well. Just put it in a jar, put
the jar on the classpath, and reference it in your hadoop-metrics.properties.
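The wiring in hadoop-metrics.properties might look something like this
(the custom class name below is a made-up placeholder; the period and
servers keys are the stock GangliaContext-style settings):

```
# hadoop-metrics.properties -- hbase section
# com.example.metrics.StatusSystemContext is a hypothetical class name
hbase.class=com.example.metrics.StatusSystemContext
hbase.period=10
# Since the custom context extends GangliaContext, the usual ganglia
# settings still apply and reporting to ganglia keeps working.
hbase.servers=ganglia-host:8649
```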
On Mon, May 7, 2012 at 9:37 AM, Doug Meil <doug.meil@...> wrote:
> You're right, it's not currently a metric.
> But there is an entry for the disk usage here...
> On 5/6/12 10:41 PM, "Otis Gospodnetic"