Fix location marker out of range #289
Conversation
if (count == 0)
{
    return;
}
Is the same protection needed in HandleMarkerAdded? While marker_count should never be zero when a marker is added, might it be possible for a marker to be added and removed prior to the slot being called?
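The guard under discussion can be sketched without Qt. This is a minimal illustration, not the project's actual code: the struct and member names below are assumptions, mirroring the `HandleMarkerAdded` slot named in the comment, and it shows the zero-count guard applied symmetrically to both the add and remove handlers so the add-then-remove race the reviewer describes cannot underflow the container.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: both slots bail out when the marker count is
// zero, which can happen if a marker was added and removed before the
// queued slot ran. Class and member names are illustrative only.
struct MarkerModelSketch
{
   std::vector<int> markers_;

   // Slot for removal: count of zero means the marker is already gone.
   bool HandleMarkerRemoved(std::size_t count)
   {
      if (count == 0)
      {
         return false; // nothing to do; avoids popping an empty vector
      }
      markers_.pop_back();
      return true;
   }

   // Slot for addition: the same race could leave the count at zero
   // here too, so the guard is mirrored.
   bool HandleMarkerAdded(std::size_t count)
   {
      if (count == 0)
      {
         return false;
      }
      markers_.push_back(0);
      return true;
   }
};
```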
OK, having assertions revealed a few issues to me. I switched to using before and after addition/removal signals to fix all the assertion issues, and added another mutex to keep the signals from working with indices that are outdated relative to the actual data.
A few final comments. Let me know if you'd like to make additional changes based on the comments, or if you're good with what you have. If you're good, I'll do some final testing and get this merged.
 * 6. emit end signals
 * 7. Unlock markerRecordModifyMutex_
 *
 * markerRecordAccessMutex_ prevents reads during the actual right.
right -> write
(I'd only update this if you're making additional changes)
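The numbered steps in the quoted comment can be sketched as a two-mutex scheme. This is a minimal Qt-free illustration under stated assumptions: the mutex names mirror the snippet, everything else (class, methods, the emitted signals being omitted) is hypothetical. The modify mutex serializes a whole change including the begin/end signals; the access mutex only guards the actual write, so readers are blocked as briefly as possible.

```cpp
#include <cstddef>
#include <mutex>
#include <shared_mutex>
#include <vector>

// Illustrative sketch of the locking order described in the comment.
class MarkerRecordsSketch
{
public:
   void Add(int record)
   {
      // 1. lock markerRecordModifyMutex_
      std::unique_lock modifyLock {markerRecordModifyMutex_};
      // 2. emit begin signals (omitted in this sketch)
      {
         // 3. lock (std::unique_lock) markerRecordAccessMutex_
         std::unique_lock accessLock {markerRecordAccessMutex_};
         records_.push_back(record); // 4. do changes
      }                              // 5. unlock markerRecordAccessMutex_
      // 6. emit end signals (omitted)
   } // 7. unlock markerRecordModifyMutex_

   std::size_t Count() const
   {
      // Readers take the access mutex shared, so they are excluded
      // only during step 4, the actual write.
      std::shared_lock accessLock {markerRecordAccessMutex_};
      return records_.size();
   }

private:
   std::mutex                markerRecordModifyMutex_;
   mutable std::shared_mutex markerRecordAccessMutex_;
   std::vector<int>          records_;
};
```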
 * 3. lock (std::unique_lock) markerRecordAccessMutex_
 * 4. Do changes
 * 5. Unlock markerRecordAccessMutex_
 * 6. emit end signals
Slot connections (auto, which is the correct usage here) are not guaranteed to be called at the time the signal is emitted; that only holds for direct connections (when the slot object lives in the same thread as the emitter). When the slot object is in a different thread (which happens on initialization), a queued connection is made, and the slot will be called in its own thread during the next iteration of the Qt event loop.
I don't see any deadlocks, so this is probably OK. I don't see any slots calling a function to modify the data. In any case, I think your error handling in the MarkerModel is sufficient to cover race conditions (if the index doesn't exist anymore, return an empty QVariant, etc.).
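The direct-versus-queued distinction above can be modeled without Qt. This is a deliberately tiny sketch, not Qt's implementation: all names are illustrative, and it only demonstrates the ordering behaviour the comment describes, i.e. a direct connection runs the slot inside the emit, while a queued connection defers it to the next (simulated) event-loop iteration.

```cpp
#include <functional>
#include <queue>
#include <utility>

// Illustrative model of direct vs. queued slot delivery (not Qt).
struct EventLoopSketch
{
   std::queue<std::function<void()>> pending;

   // Direct connection: the slot runs inside the emit.
   void EmitDirect(const std::function<void()>& slot) { slot(); }

   // Queued connection: the slot runs on a later loop iteration.
   void EmitQueued(std::function<void()> slot)
   {
      pending.push(std::move(slot));
   }

   // One iteration of the event loop drains the queue.
   void ProcessEvents()
   {
      while (!pending.empty())
      {
         pending.front()();
         pending.pop();
      }
   }
};
```

This is why code relying on a slot having already run at emit time can break once the slot object moves to another thread: the state it observes is whatever exists when the loop gets around to it.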
It seems to be working, but I was getting asserts within beginRemoveRows because I was removing from the manager before calling it. I think that could still happen. I can think of two ways to fix it:
- Turn the model into a singleton, and call functions directly (instead of using slots) from the manager.
- Have all insert and delete calls go through the model first, then call the manager. This would likely require the model be a singleton as well.
Option 1 will require fewer changes, but I think option 2 is closer to the way Qt intended models to work. I prefer 1, but I would not mind doing 2 either.
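Option 1 amounts to making the model a Meyers singleton so the manager can invoke it directly instead of relying on slot delivery order. A minimal sketch, with an assumed class name (the real model class is not shown in this thread):

```cpp
// Hypothetical sketch of option 1: a Meyers singleton for the model.
// The class name is illustrative; the real model would also derive
// from QAbstractItemModel or similar.
class MarkerModelSingleton
{
public:
   static MarkerModelSingleton& Instance()
   {
      // Function-local static: initialization is thread-safe in C++11+.
      static MarkerModelSingleton instance;
      return instance;
   }

   // Singletons are neither copyable nor movable.
   MarkerModelSingleton(const MarkerModelSingleton&)            = delete;
   MarkerModelSingleton& operator=(const MarkerModelSingleton&) = delete;

private:
   MarkerModelSingleton() = default;
};
```

With this, the manager can call `MarkerModelSingleton::Instance()` before mutating its own data, giving the model a chance to call `beginRemoveRows` at the right time.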
I'm not sure I like having the manager call the model directly, or having all insert/delete go through the model (unless it's a combined model/manager similar to the layer model); I think the dependency should be the other way around. The way the current managers/models are patterned in the code, the manager is the source of truth for the data, which I'd like to keep.
I think the model being a singleton is just fine. There are other singleton models already.
The placefile model and manager are probably the best current examples of this working together. The placefile model keeps a thin copy of placefile names to use as a reference into the manager for other operations; it's this data structure that the begin/end operations bound. To mitigate a potential data race, if the data in the underlying manager changes and a name goes stale, the placefile manager returns a default value, and any mismatch in the displayed data should be resolved on the next Qt event loop frame.
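The thin-copy pattern described above can be sketched without Qt. This is an assumption-laden illustration (all class and member names are invented; the real code uses Qt model classes and returns `QVariant`): the model holds only a list of names bounded by the begin/end operations, and any stale or out-of-range lookup falls back to a default value rather than failing.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Illustrative manager: the source of truth, keyed by name.
struct PlacefileManagerSketch
{
   std::map<std::string, std::string> records_; // name -> data

   // A name that has gone stale simply yields a default value.
   std::string Data(const std::string& name) const
   {
      auto it = records_.find(name);
      return (it != records_.end()) ? it->second : std::string {};
   }
};

// Illustrative model: keeps only a thin copy of names as a reference
// into the manager.
struct PlacefileModelSketch
{
   std::vector<std::string>      names_;
   const PlacefileManagerSketch* manager_ {nullptr};

   std::string DataAt(std::size_t row) const
   {
      if (row >= names_.size())
      {
         return {}; // out of range: default value
      }
      return manager_->Data(names_[row]); // stale name: also default
   }
};
```

Any mismatch this produces is transient: the next Qt event loop frame re-reads the data and the display catches up.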
I have a branch that works more like the placefile manager/model. I use a 64-bit integer as an ID. The IDs are local to the process, not saved to the JSON. Making IDs persistent would require more changes, but would be possible. It would also revert the last two commits. I can integrate it into this pull request if you think this makes more sense than relying on things happening in the same thread for correct ordering.
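A process-local 64-bit ID source of the kind described is typically just an atomic counter. A minimal sketch (the function name is an assumption; the branch's actual implementation is not shown in this thread):

```cpp
#include <atomic>
#include <cstdint>

// Illustrative process-local ID generator: IDs are unique for the
// lifetime of the process and are never persisted to the JSON.
std::uint64_t NextMarkerId()
{
   static std::atomic<std::uint64_t> nextId {0};
   return nextId.fetch_add(1); // returns the previous value
}
```

Because the counter is atomic, IDs can be handed out from any thread without additional locking, which sidesteps the signal-ordering concerns discussed above.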
I can take a closer look later, but I don't think IDs necessarily need to be persistent.
Because of how it diverged, I thought it would be better and simpler to have it be its own pull request.
Closing in favor of the alternate pull request.
The fix itself is really simple, although I am not able to replicate the assertion failure on Linux, so that should be double-checked (the tests should catch it, though).
The test cases are a bit complex because of threading. If they are too complex, they can be removed or moved to a separate pull request.