
Missing dependent=>destoy deps on target images #91

Open
jprovaznik opened this issue Dec 18, 2012 · 3 comments

Comments

@jprovaznik
Contributor

Dependent objects in the "object chain" are not currently destroyed; e.g. if I destroy a base image, the image versions associated with that base image are kept. The same is true for all objects. This issue handles the first level of this: if you delete a target image, it should hook into the state machine properly, call factory when it should, and cascade to provider images properly.

@jguiditta
Member

We discussed this briefly in IRC; @sseago made a convincing argument about leaving orphaned objects. We should assemble a list of which objects need this setting, and in which direction (if applicable), before starting work.

@jprovaznik
Contributor Author

Conductor expects that all has_many associations, in the direction from base image to provider image, are destroyed:
If a base image is destroyed, all of its image versions are destroyed.
If an image version is destroyed, all of its target images are destroyed.
If a target image is destroyed, all of its provider images are destroyed.
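In ActiveRecord terms, the cascade above corresponds to declaring `:dependent => :destroy` on each has_many link in the chain. Here is a minimal plain-Ruby sketch of that behavior; the tiny `has_many` macro is a stand-in for ActiveRecord's real implementation, and only the model names are taken from the list above:

```ruby
# Minimal stand-in for ActiveRecord's `has_many ..., :dependent => :destroy`.
module MiniModel
  def self.included(base); base.extend(ClassMethods); end

  module ClassMethods
    # Declares a child collection; when :dependent => :destroy is given,
    # destroying the parent also destroys every child in that collection.
    def has_many(name, opts = {})
      define_method(name) { (@children ||= Hash.new { |h, k| h[k] = [] })[name] }
      (@cascades ||= []) << name if opts[:dependent] == :destroy
    end

    def cascades; @cascades || []; end
  end

  def destroyed?; !!@destroyed; end

  def destroy
    @destroyed = true
    self.class.cascades.each { |assoc| send(assoc).each(&:destroy) }
  end
end

class ProviderImage; include MiniModel; end

class TargetImage
  include MiniModel
  has_many :provider_images, :dependent => :destroy
end

class ImageVersion
  include MiniModel
  has_many :target_images, :dependent => :destroy
end

class BaseImage
  include MiniModel
  has_many :image_versions, :dependent => :destroy
end
```

With these declarations, destroying a base image walks the whole chain down to provider images, which is exactly the direction described above.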

In addition, the template should currently also be deleted when its associated base image is deleted, but this is Conductor-specific behavior; we should take care of template deletion inside Conductor.

@jguiditta
Member

We will work on this in parallel with factory as much as possible; they are going to implement cascading on their side as well (redhat-imaging/imagefactory#219). It does mean some more thought is needed around the process on Tim's side. We may also want to 'fix' our base_image to actually wrap factory's concept of a base image, instead of being something different as it is now (this is under discussion as part of #97).

Assuming they were the same for the moment: if we delete a base image via the Tim API, factory would notify us of success/failure once the base image has been deleted on their side. I am concerned this will make things a bit messy to implement on our side. We probably won't be able to use a straight :dependent => :destroy, but will have to include some conditions and callbacks. So calling delete might actually pass the request through to factory, then set the status on this object (and child objects?) to something like PENDING_DELETION. Then, when we get the call that all went well, we can go ahead and destroy our entire tree as well. Maybe I am overthinking this, but it feels like it could quickly turn into a mess.
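The two-phase flow described above could be sketched roughly as follows. This is a plain-Ruby illustration under stated assumptions: the status names (:pending_deletion, :deleted), the factory_client interface, and the callback method names are all hypothetical, not an existing Conductor or factory API:

```ruby
# Hypothetical two-phase delete: mark the tree PENDING_DELETION, forward
# the request to factory, and only destroy locally once factory confirms.
class ProviderImage
  attr_accessor :status
  def initialize; @status = :active; end
end

class TargetImage
  attr_reader :status, :provider_images

  def initialize(factory_client, provider_images = [])
    @factory_client = factory_client   # assumed interface: #request_delete(obj)
    @provider_images = provider_images
    @status = :active
  end

  # Phase 1: pass the delete request through to factory; nothing is
  # destroyed locally yet, the whole subtree is only marked as pending.
  def delete
    @factory_client.request_delete(self)
    set_tree_status(:pending_deletion)
  end

  # Phase 2: factory calls back on success; now cascade for real.
  def factory_delete_succeeded
    set_tree_status(:deleted)
  end

  # If factory reports failure, roll the subtree back to active.
  def factory_delete_failed
    set_tree_status(:active)
  end

  private

  def set_tree_status(status)
    @status = status
    @provider_images.each { |pi| pi.status = status }
  end
end

# Trivial stub standing in for the real factory client.
class FakeFactory
  attr_reader :requests
  def initialize; @requests = []; end
  def request_delete(obj); @requests << obj; end
end
```

The same pattern would repeat one level up: an image_version's delete would mark its target_images pending and wait for each target_image's confirmation before destroying them.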

The behavior higher up the tree would then be different, as those objects would receive a direct callback notifying, say, a target_image that its delete succeeded. This would then cascade one layer further, to provider images.
