Re: ::Requesting for suggestions on Slony Replication of 3 TB Of Data::
On Fri, Jun 8, 2012 at 3:30 PM, dinesh kumar <dineshkumar02@...> wrote:
> Hi Steve,
> Thank you for the e-mail.
> On Sat, Jun 9, 2012 at 1:39 AM, Steve Singer <ssinger@...> wrote:
>> On 12-06-08 02:50 PM, dinesh kumar wrote:
>>> Thank you so much, Steve.
>>> We are using Slony 2.1.x and are still at the initial stage:
>>> Slony is still copying the data from the source to the destination host.
>> Has your initial copy_set finished?
>>> This is where we are facing the problem: Slony is struggling to copy the
>>> data. It went well up to 140 GB; from 141 GB onwards the copy slowed down,
>>> and now it is extremely slow.
>>> Let me know if you need any of the configuration we have used.
>> Are you doing this copy over a WAN?
> We are doing the copy over a LAN. We got a very good transfer rate up to a
> point; after some time (I'm not sure exactly when), the build slowed down.
We need to know why.
>> Can you tell what the bottleneck is? Are you IO bound on the slave? Are
>> you CPU bound on the slave?
> We don't see any performance issues. IO is really good on production (the
> primary cluster, where the database is 3 TB) and the CPU load average is normal.
What does iostat -xd 10 or similar say about performance on the new Slony
destination while it is slow?
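To make that concrete, here is a minimal sketch of pulling await and %util out
of an extended iostat line. The sample line and the field positions are
assumptions for illustration (sysstat versions lay the columns out
differently), so check the header row of your own `iostat -xd 10` output
before trusting the field numbers:

```shell
# Fabricated sample line, shaped like one device row of `iostat -xd` output;
# real numbers come from running `iostat -xd 10` on the slow destination.
sample="sda 0.00 12.00 5.00 210.00 80.00 9800.00 91.9 8.5 39.5 4.6 99.2"

# Assuming this layout, await is field 10 and %util is field 12.
# High await (per-request latency) or %util near 100 means the device
# itself is the bottleneck.
parsed=$(echo "$sample" | awk '{ printf "await=%s ms, util=%s%%", $10, $12 }')
echo "$parsed"
```

A %util pinned near 100 on the subscriber's data volume would point at IO,
while low %util with a slow copy points elsewhere (network, CPU, or the query
plan on the provider).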
> We think it is a problem with the DB itself. We have around five to eight
> tables of roughly 400 GB each, and they also have heavy bloat. We think
> Slony is spending time fetching the live rows while skipping past the dead
> rows, so the live rows may be scattered across many files. We think this
> might be the root cause.
I doubt bloat in the source database would make things 1 GB/hr slow.
That's only about 16 MB/minute.
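For reference, the arithmetic behind that estimate (assuming decimal
gigabytes, 1 GB = 1000 MB):

```shell
# 1 GB/hr expressed in MB/min: 1000 MB spread over 60 minutes.
gb_per_hour=1
mb_per_min=$(( gb_per_hour * 1000 / 60 ))
echo "${mb_per_min} MB/min"
```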
> We just stopped the existing Slony setup; we will run a vacuum on the DB
> first and then try the copy again.
Yeah, this time try to figure out what the bottleneck is via iostat, vmstat,
iotop, nettop, SELECT * FROM pg_stat_activity, and so on.
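Before the next attempt it is worth checking which of those probes are even
installed on the subscriber. A small sketch (the psql line is commented out,
and "yourdb" is a placeholder for the subscriber database name):

```shell
# Report which of the suggested diagnostic tools are present on this host.
probes=$(for tool in iostat vmstat iotop; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: available"
  else
    echo "$tool: missing (iostat/vmstat ship with sysstat/procps; iotop is its own package)"
  fi
done)
echo "$probes"

# On the subscriber, while the copy is slow, also look at what the backend
# doing the COPY is up to (placeholder database name):
# psql -d yourdb -c "SELECT pid, state, query FROM pg_stat_activity;"
```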