From: Alexey K. <a.k...@sa...> - 2003-11-25 20:50:38
> On Sunday 23 November 2003 07:36, Zhenja Kaluta wrote:
> > I have a volume (on any kind of region -- single partition, raid, lvm
> > region ...)
> >
> > create a snapshot of this volume on an lvm region which lies on any kind
> > of raid, for example
> >
> > Snapshot
> >   -- lvm/LMV_Container
> >        -- md0 (raid region)
> >             -- hda1
> >             -- hdb1
> >
> > and activate it (create volume)
> > make some disk operations (volume - good, snapshot - good)
> > rollback snapshot.
> > Everything is fine.
> > remove snapshot (or deactivate -- remove volume)
> >
> > Change data on the volume
> >
> > Create (or activate again) the snapshot
> > Change data
> > Rollback
> >
> > You have the snapshot and volume as of the moment the first snapshot
> > was created.
> >
> > I see the problem in caching operations in dm, but I may be wrong.
>
> I can't seem to reproduce this problem. Here's the test I'm running.
>
> - Create three disk segments on hdd, each 500 MB.
> - Create /dev/evms/Org from hdd1, and add an ext3 filesystem.
> - Mount /dev/evms/Org and copy a 2.4.17 kernel tree to the volume.
>
> - Create a RAID-1 (mirror) region with hdd2 and hdd3.
> - Create an LVM container with the RAID-1.
> - Create one 250 MB LVM region named snap1_lv.
> - Create a snapshot of /dev/evms/Org named /dev/evms/Snap1 on top of the
>   LVM region.
> - Remove the 2.4.17 kernel tree from Org. Copy a 2.4.18 tree to Org.
>   Verify the 2.4.17 tree still exists on Snap1.
> - Unmount Org and Snap1.
>
> - Rollback Snap1 to Org.
> - Verify the 2.4.17 tree on Org.
>
> - Remove the Snap1 volume.
> - Remove the 2.4.17 tree from Org.
> - Copy a 2.4.18 kernel tree to Org.
> - Recreate the snapshot Snap1 (same as above).
> - Remove the 2.4.18 tree from Org and copy a 2.4.19 tree to Org. Verify
>   the 2.4.18 tree still exists on Snap1.
> - Unmount Org and Snap1.
>
> - Rollback Snap1 to Org.
> - Verify the 2.4.18 tree on Org.
>
> This sounds like the test you described. It works as expected on my
> machine. Each time I roll back the snapshot, the origin is restored to
> the correct state.
>
> Let me know if you'd like me to try another test, or if there's anything
> different about the tests you're running.

Corry,

I've experienced the same problem as Zhenja. First of all I'd like to
mention that all of this happens with evms-2.1.1/DM v4. Below is a
scenario with which you should run into the same trouble :)

-- create RAID1 region "md0" on hda5, hdb5;
-- create LVM container "LVM1" on top of "md0";
-- create a couple of LVs, "LV1" and "LV2", on "LVM1";
-- create volume "Volume1" from "LV1", with XFS on board;
-- mount and write some data into "Volume1";
-- create snapshot "Snap1" of that volume from "LV2";
*  activate the snapshot, get volume "Snap1";           (here is a *tick*)
-- mount "Snap1" and observe that all data look good;
-- mount "Volume1", then delete some files;
-- unmount all considered volumes, do rollback;
-- mount "Volume1"; everything is OK, data recovered from "Snap1";
-- add/delete some files to/from "Volume1";
** do snapshot reset;                                   (another *tick*)
-- mount "Snap1"; all data are in sync with "Volume1" at this moment, good;
-- change "Volume1" (e.g. delete all files);
-- unmount both "Volume1" and "Snap1";
-- rollback "Snap1";
-- mount "Volume1" and observe data as of the ticked (*) item, though I
   expected them to be as of item (**)..

that is it.

regards,
Alexey
ICQ:97715595

--
Your supervisor is thinking about you.
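
P.S. In case it helps narrow this down: between those steps I find it
useful to look at the device-mapper state directly with dmsetup. This is
only a rough sketch -- the device names below ("Snap1", "Volume1") are
just examples; the actual dm names EVMS creates on your box are whatever
"dmsetup ls" reports:

  # list the active device-mapper devices EVMS has set up
  dmsetup ls

  # print the mapping tables; the line for the snapshot target should name
  # both the origin device and the COW (exception store) device it uses
  dmsetup table Snap1      # example name, substitute the real dm name
  dmsetup table Volume1

  # show the snapshot status (roughly, how much of the exception store is
  # in use); if this still reports the old usage right after the snapshot
  # is re-created/reset, the COW device was apparently not reinitialized
  # and a later rollback would replay stale exceptions
  dmsetup status Snap1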