Mitch Harder | 30 Aug 17:18 2012

Varying Leafsize and Nodesize in Btrfs

I've been trying out different leafsize/nodesize settings by
benchmarking some typical operations.

These changes had more impact than I expected.  Using a
leafsize/nodesize of either 8192 or 16384 provided a noticeable
improvement in my limited testing.

These results are similar to some that Chris Mason has already
reported:  https://oss.oracle.com/~mason/blocksizes/

I noticed that metadata allocation was more efficient with bigger
block sizes.  My data was git kernel sources, which will utilize
btrfs' inlining.  This may have tilted the scales.
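
A side note on inlining: besides the leaf size itself, how much file data
can be stored inline in the metadata leaves is capped by the max_inline
mount option.  A hedged example (the value and device name are
placeholders, not recommendations):

    mount -o max_inline=2048 /dev/sdX /mnt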

Read operations seemed to benefit the most.  Write operations seemed
to get punished when the leafsize/nodesize was increased to 64K.

Are there any known downsides to using a leafsize/nodesize bigger than
the default 4096?

Time (seconds) to finish 7 simultaneous copy operations on a set of
Linux kernel git sources.

Leafsize/
Nodesize    Time (Std Dev%)
4096         124.7 (1.25%)
8192         115.2 (0.69%)
16384        114.8 (0.53%)
65536        130.5 (0.3%)
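
For reference, the leaf and node sizes are set at mkfs time and cannot be
changed afterwards; with the btrfs-progs of this era the invocation looks
something like the following (the device name is a placeholder):

    mkfs.btrfs -l 16384 -n 16384 /dev/sdX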


Josef Bacik | 30 Aug 18:25 2012

Re: Varying Leafsize and Nodesize in Btrfs

On Thu, Aug 30, 2012 at 09:18:07AM -0600, Mitch Harder wrote:
> I've been trying out different leafsize/nodesize settings by
> benchmarking some typical operations.
> 
> These changes had more impact than I expected.  Using a
> leafsize/nodesize of either 8192 or 16384 provided a noticeable
> improvement in my limited testing.
> 
> These results are similar to some that Chris Mason has already
> reported:  https://oss.oracle.com/~mason/blocksizes/
> 
> I noticed that metadata allocation was more efficient with bigger
> block sizes.  My data was git kernel sources, which will utilize
> btrfs' inlining.  This may have tilted the scales.
> 
> Read operations seemed to benefit the most.  Write operations seemed
> to get punished when the leafsize/nodesize was increased to 64K.
> 
> Are there any known downsides to using a leafsize/nodesize bigger than
> the default 4096?
> 

Once you cross some hardware-dependent threshold (usually past 32k) you start
incurring high memmove() overhead in most workloads.  As with all benchmarking,
it's good to test your workload and see what works best, but 16k should
generally be the best option.  Thanks,

Josef
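
To make the memmove() point concrete, here is a minimal sketch, not actual
btrfs code: items within a leaf are kept sorted, so inserting into the
middle shifts everything after the insertion point, and the bytes moved per
insert grow with the leaf size.

    #include <string.h>

    /* Simplified, array-backed leaf; real btrfs leaves hold
     * variable-size items, but the shifting cost is the same idea. */
    struct item { unsigned long key; /* value fields ... */ };

    static void leaf_insert(struct item *items, int *nr, int slot,
                            const struct item *new_item)
    {
            /* make room: shift items[slot..nr-1] up by one entry;
             * on average half the leaf moves, so a 64k leaf moves
             * ~8x more bytes per insert than an 8k leaf */
            memmove(items + slot + 1, items + slot,
                    (*nr - slot) * sizeof(*items));
            items[slot] = *new_item;
            (*nr)++;
    }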

Martin Steigerwald | 30 Aug 23:34 2012

Re: Varying Leafsize and Nodesize in Btrfs

On Thursday, 30 August 2012, Josef Bacik wrote:
> On Thu, Aug 30, 2012 at 09:18:07AM -0600, Mitch Harder wrote:
> > I've been trying out different leafsize/nodesize settings by
> > benchmarking some typical operations.
> > 
> > These changes had more impact than I expected.  Using a
> > leafsize/nodesize of either 8192 or 16384 provided a noticeable
> > improvement in my limited testing.
> > 
> > These results are similar to some that Chris Mason has already
> > reported:  https://oss.oracle.com/~mason/blocksizes/
> > 
> > I noticed that metadata allocation was more efficient with bigger
> > block sizes.  My data was git kernel sources, which will utilize
> > btrfs' inlining.  This may have tilted the scales.
> > 
> > Read operations seemed to benefit the most.  Write operations seemed
> > to get punished when the leafsize/nodesize was increased to 64K.
> > 
> > Are there any known downsides to using a leafsize/nodesize bigger
> > than the default 4096?
> 
> Once you cross some hardware-dependent threshold (usually past 32k) you
> start incurring high memmove() overhead in most workloads.  As with all
> benchmarking, it's good to test your workload and see what works best,
> but 16k should generally be the best option.  Thanks,

I wanted to ask about 32k as well.

I used 32k on one 2.5 inch external eSATA disk. But I never measured
anything so far.

I wonder what a good value for SSD might be. I tend not to use any more
than 16k, but that's just a gut feeling right now, nothing based on a
well-founded explanation.

Josef Bacik | 30 Aug 23:50 2012

Re: Varying Leafsize and Nodesize in Btrfs

On Thu, Aug 30, 2012 at 03:34:49PM -0600, Martin Steigerwald wrote:
> On Thursday, 30 August 2012, Josef Bacik wrote:
> > On Thu, Aug 30, 2012 at 09:18:07AM -0600, Mitch Harder wrote:
> > > I've been trying out different leafsize/nodesize settings by
> > > benchmarking some typical operations.
> > > 
> > > These changes had more impact than I expected.  Using a
> > > leafsize/nodesize of either 8192 or 16384 provided a noticeable
> > > improvement in my limited testing.
> > > 
> > > These results are similar to some that Chris Mason has already
> > > reported:  https://oss.oracle.com/~mason/blocksizes/
> > > 
> > > I noticed that metadata allocation was more efficient with bigger
> > > block sizes.  My data was git kernel sources, which will utilize
> > > btrfs' inlining.  This may have tilted the scales.
> > > 
> > > Read operations seemed to benefit the most.  Write operations seemed
> > > to get punished when the leafsize/nodesize was increased to 64K.
> > > 
> > > Are there any known downsides to using a leafsize/nodesize bigger
> > > than the default 4096?
> > 
> > Once you cross some hardware-dependent threshold (usually past 32k) you
> > start incurring high memmove() overhead in most workloads.  As with all
> > benchmarking, it's good to test your workload and see what works best,
> > but 16k should generally be the best option.  Thanks,
> 
> I wanted to ask about 32k as well.
> 
> I used 32k on one 2.5 inch external eSATA disk. But I never measured
> anything so far.
> 
> I wonder what a good value for SSD might be. I tend not to use any more
> than 16k, but that's just a gut feeling right now, nothing based on a
> well-founded explanation.

32k really starts to depend on your workload.  Generally speaking everybody
will be faster with 16k, but 32k starts to depend on your workload and
hardware, and then anything above 64k really starts to hurt with memmove().
With this sort of thing SSD vs. not isn't going to make much of a
difference; erase blocks tend to be several megs in size, so you aren't
going to get anywhere close to avoiding the internal RMW cycle inside the
SSD.  Thanks,

Josef

Chris Mason | 31 Aug 02:01 2012

Re: Varying Leafsize and Nodesize in Btrfs

On Thu, Aug 30, 2012 at 03:50:08PM -0600, Josef Bacik wrote:
> On Thu, Aug 30, 2012 at 03:34:49PM -0600, Martin Steigerwald wrote:
> > I wonder what a good value for SSD might be. I tend not to use any more
> > than 16k, but that's just a gut feeling right now, nothing based on a
> > well-founded explanation.
> >
> 
> 32k really starts to depend on your workload.  Generally speaking everybody
> will be faster with 16k, but 32k starts to depend on your workload and
> hardware, and then anything above 64k really starts to hurt with memmove().
> With this sort of thing SSD vs. not isn't going to make much of a
> difference; erase blocks tend to be several megs in size, so you aren't
> going to get anywhere close to avoiding the internal RMW cycle inside the
> SSD.  Thanks,

I almost made 16k the default, but the problem is that it does increase
lock contention because bigger nodes mean fewer locks.  You can see this
with dbench and compilebench, especially early in the FS life.

My goal is to make the cow step of btrfs_search_slot really atomic, so
we don't have to switch to a blocking lock.  That will really fix a lot
of contention problems.

-chris
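
A back-of-the-envelope way to see the "fewer locks" point; the 200-byte
average item size below is an assumption for illustration, not a btrfs
constant:

    #include <stdio.h>

    /* For a fixed number of items, bigger leaves mean fewer leaves,
     * and each leaf is guarded by one lock. */
    int main(void)
    {
            const long items = 1000000, item_size = 200;
            const int sizes[] = { 4096, 8192, 16384, 32768, 65536 };
            for (int i = 0; i < 5; i++) {
                    long per_leaf = sizes[i] / item_size;
                    printf("%6d-byte leaves: ~%ld leaves (locks)\n",
                           sizes[i], items / per_leaf);
            }
            return 0;
    }

Fewer locks covering the same key space means concurrent writers collide
on the same node more often, which is the contention described above.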


Roman Mamedov | 31 Aug 07:02 2012

Re: Varying Leafsize and Nodesize in Btrfs

On Thu, 30 Aug 2012 23:34:49 +0200
Martin Steigerwald <Martin <at> lichtvoll.de> wrote:

> I wanted to ask about 32k as well.
> 
> I used 32k on one 2.5 inch external eSATA disk. But I never measured
> anything so far.
> 
> I wonder what a good value for SSD might be. I tend not to use any more
> than 16k, but that's just a gut feeling right now, nothing based on a
> well-founded explanation.

If you look closely at https://oss.oracle.com/~mason/blocksizes/ , you will
notice that 16K delivers almost all of 32K's performance gains in "Read",
while not suffering from the slowdowns that 32K shows in "Create" and
"Delete".

I have chosen 16K for my new /home partition (on an SSD+HDD mdadm RAID1).
What disappointed me at the time is that one can't seem to have a "mixed"
allocation FS with non-default leaf/node sizes.
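
The "mixed" limitation appears to come from the requirement that mixed
block groups use the same block size for data and metadata, i.e. nodesize
must equal sectorsize; something like the following is expected to be
rejected (the device name is a placeholder):

    mkfs.btrfs -M -l 16384 -n 16384 /dev/sdX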

-- 
With respect,
Roman

~~~~~~~~~~~~~~~~~~~~~~~~~~~
"Stallman had a printer,
with code he could not see.
So he began to tinker,
and set the software free."

Phillip Susi | 11 Oct 19:58 2012

Re: Varying Leafsize and Nodesize in Btrfs


On 8/30/2012 12:25 PM, Josef Bacik wrote:
> Once you cross some hardware-dependent threshold (usually past 32k)
> you start incurring high memmove() overhead in most workloads.
> As with all benchmarking, it's good to test your workload and see what
> works best, but 16k should generally be the best option.  Thanks,
> 
> Josef

Why are memmove()s necessary, can they be avoided, and why do they
incur more overhead with 32k+ sizes?

Martin Steigerwald | 12 Oct 12:32 2012

Re: Varying Leafsize and Nodesize in Btrfs

On Thursday, 30 August 2012, Mitch Harder wrote:
> I've been trying out different leafsize/nodesize settings by
> benchmarking some typical operations.
> 
> These changes had more impact than I expected.  Using a
> leafsize/nodesize of either 8192 or 16384 provided a noticeable
> improvement in my limited testing.
> 
> These results are similar to some that Chris Mason has already
> reported:  https://oss.oracle.com/~mason/blocksizes/
> 
> I noticed that metadata allocation was more efficient with bigger
> block sizes.  My data was git kernel sources, which will utilize
> btrfs' inlining.  This may have tilted the scales.
> 
> Read operations seemed to benefit the most.  Write operations seemed
> to get punished when the leafsize/nodesize was increased to 64K.
> 
> Are there any known downsides to using a leafsize/nodesize bigger than
> the default 4096?
> 
> 
> Time (seconds) to finish 7 simultaneous copy operations on a set of
> Linux kernel git sources.
> 
> Leafsize/
> Nodesize    Time (Std Dev%)
> 4096         124.7 (1.25%)
> 8192         115.2 (0.69%)
> 16384        114.8 (0.53%)
> 65536        130.5 (0.3%)

Thanks for your testing, Mitch.

I would be interested in results for 32768 bytes as well.

Why?

It improves until 16384 bytes but then it gets worse with 65536 bytes. It
would be interesting to know whether it improves for 32768 or already gets
worse with that value :)

Martin Steigerwald | 12 Oct 14:52 2012

Re: Varying Leafsize and Nodesize in Btrfs

On Friday, 12 October 2012, Martin Steigerwald wrote:
> > Time (seconds) to finish 7 simultaneous copy operations on a set of
> > Linux kernel git sources.
> >
> > Leafsize/
> > Nodesize    Time (Std Dev%)
> > 4096         124.7 (1.25%)
> > 8192         115.2 (0.69%)
> > 16384        114.8 (0.53%)
> > 65536        130.5 (0.3%)
> 
> Thanks for your testing, Mitch.
> 
> I would be interested in results for 32768 bytes as well.
> 
> Why?
> 
> It improves until 16384 bytes but then it gets worse with 65536 bytes.
> It  would be interesting to know whether it improves for 32768 or
> already gets worse with that value :)

Please ignore. I was replying to an old thread that was shown at the top of
the message list in order to answer Phillip again. We had already covered
that topic.

Sorry for the noise.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de

