commits and recoveries

Moderator: NorbertKrupa

DGh
Newbie
Posts: 14
Joined: Tue Oct 08, 2013 10:10 pm

commits and recoveries

Post by DGh » Mon Feb 24, 2014 8:22 am

All,
I have a few questions around the behavior of commits and when actual data is written to the disk.

It seems like if I commit a really big transaction 'A', it might take a long time after the commit has been issued for the moveout process to move the data from the WOS to the ROS, and until that happens the last_good_epoch is not going to move.
During that time, if I run and commit several small transactions (unrelated to A), the current_epoch will move forward, and their data will probably be written to disk.
However, if the db crashes somewhere during this time (after the small transactions were written to disk and the big transaction A was in the middle of moveout), the last_good_epoch will still be where it was when A started, and I could only recover the database to that point.

Is that a correct understanding? If it is, it looks like one big bad DML can have an adverse effect on the db as a whole.
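
For illustration, a minimal way to watch that gap on a running system (this assumes the CURRENT_EPOCH and LAST_GOOD_EPOCH columns of the SYSTEM system table; verify against your Vertica version's docs):

   -- How far the recovery point (LGE) trails the current epoch.
   SELECT current_epoch,
          last_good_epoch,
          current_epoch - last_good_epoch AS epoch_lag
   FROM system;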

Also, I think I read that each node has its own WOS. So if a node goes down in a K-safe environment, what happens to the data in the WOS?

thanks a lot for your help
Dgh

id10t
GURU
Posts: 732
Joined: Mon Apr 16, 2012 2:44 pm

Re: commits and recoveries

Post by id10t » Mon Feb 24, 2014 8:51 am

Hi!

[DELETED]
Last edited by id10t on Fri May 08, 2015 4:36 pm, edited 1 time in total.

scutter
Master
Posts: 302
Joined: Tue Aug 07, 2012 2:15 am

Re: commits and recoveries

Post by scutter » Mon Feb 24, 2014 2:54 pm

>Sharon is a champ in this question

I’ve been summoned ;-)

Hi Dgh,

Your interpretation is correct. The LGE controls the recovery point for the database. If the cluster-wide LGE stays behind because a large amount of data hasn’t yet moved out of the WOS, or gets stuck in the WOS because it’s unable to moveout (long replay-delete work by the tuple mover can cause this), then a database crash would result in a rollback to that older LGE. You would roll back not only the data from that load, but all data loaded since that load. If only a subset of nodes hasn’t managed to move the data out of the WOS, then recovery may be able to roll back to a later epoch.
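
(A sketch of how one might nudge a stalled moveout along and watch Tuple Mover progress; DO_TM_TASK and the TUPLE_MOVER_OPERATIONS table are documented Vertica features, but double-check the column names for your release:)

   -- Ask the Tuple Mover to run a moveout pass now (optionally scoped to one table).
   SELECT DO_TM_TASK('moveout');

   -- Inspect in-flight Tuple Mover work, e.g. long-running moveout or replay-delete operations.
   SELECT node_name, operation_name, operation_status, table_name
   FROM tuple_mover_operations;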

For these reasons it’s important to monitor for Stale Checkpoint events and to take them seriously. Stale Checkpoints across the entire cluster mean that the LGE has fallen behind.
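
(One way to check for that, assuming the ACTIVE_EVENTS system table and the 'Stale Checkpoint' event description in your version:)

   -- Nodes currently reporting a Stale Checkpoint event.
   SELECT node_name, event_code_description, event_posted_timestamp
   FROM active_events
   WHERE event_code_description = 'Stale Checkpoint';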

Is your use case such that you really need this large load to go into the WOS rather than DIRECT to the ROS? Have you increased the size of the wosdata pool? If not, then the WOS is only 2 GB and would be spilling data to the ROS anyway. How long is it taking for <2 GB of WOS data to move out?
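
(Hedged sketches of both options, with placeholder table/file names and a placeholder 4G size; check the COPY and ALTER RESOURCE POOL syntax for your Vertica version:)

   -- Load straight into the ROS, bypassing the WOS.
   COPY big_fact FROM '/data/big_fact.csv' DELIMITER ',' DIRECT;

   -- Or give the WOS more room by resizing the built-in wosdata pool.
   ALTER RESOURCE POOL wosdata MAXMEMORYSIZE '4G';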

—Sharon
Sharon Cutter
Vertica Consultant, Zazz Technologies LLC

id10t
GURU
Posts: 732
Joined: Mon Apr 16, 2012 2:44 pm

Re: commits and recoveries

Post by id10t » Mon Feb 24, 2014 6:04 pm

Hi!

[DELETED]
Last edited by id10t on Fri May 08, 2015 4:35 pm, edited 1 time in total.

DGh
Newbie
Posts: 14
Joined: Tue Oct 08, 2013 10:10 pm

Re: commits and recoveries

Post by DGh » Wed Feb 26, 2014 6:46 am

Thank you Skwa and Sharon for the explanation.

@Sharon - most of the ETL processes we have use the DIRECT hint, so they write into the ROS.
We have had some bad experiences where user DMLs (to their own schemas, but in the same database) went rogue and took up a lot of WOS memory, causing the last good epoch to lag substantially behind the current epoch. I am not sure if that's because the moveout process isn't tuned properly, but I will watch closely next time it happens and hopefully be able to capture more info.
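
(In case it helps anyone else, a sketch of how one might spot which schemas are holding WOS memory; it assumes the anchor_table_schema and wos_used_bytes columns of the PROJECTION_STORAGE system table, so verify against your version:)

   -- WOS usage by schema, to find loads that should probably be DIRECT.
   SELECT anchor_table_schema,
          SUM(wos_used_bytes) AS wos_bytes,
          SUM(wos_row_count)  AS wos_rows
   FROM projection_storage
   GROUP BY anchor_table_schema
   ORDER BY wos_bytes DESC;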

thanks again!
