Modelling designators for logging #318
base: master
Conversation
Here is some info — apologies for the rough formatting, but I hope it helps ^^
In practice we usually just pass names and types (plural, since the designator description accepts a list of them). The designator is then resolved by perception, which fills in as many fields as possible.
```python
Location(furniture_item='coffee table', room='living room')
```

This gets resolved (via KnowRob in this case) to a list of dict entries:

```python
[{'Item': {'value': 'http://www.ease-crc.org/ont/SUTURO.owl#CoffeeTable',
           'link': 'iai_kitchen/coffee_table:coffee_table:table_center',
           'room': 'http://www.ease-crc.org/ont/SUTURO.owl#LivingRoom_RTKDJVBC',
           'pose': header:
                     seq: 0
                     stamp:
                       secs: 1725356965
                       nsecs: 826580762
                     frame_id: "map"
                   pose:
                     position:
                       x: 8.435593626312391
                       y: -0.2637401054035722
                       z: 0
                     orientation:
                       x: 0.0
                       y: 0.0
                       z: 0.024997395914712332
                       w: 0.9996875162757026}}]
```
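To make the resolution flow above concrete, here is a minimal self-contained sketch of the pattern (the `KNOWLEDGE_BASE` dict and this `Location` class are illustrative stand-ins, not the real PyCRAM/KnowRob API): a description holds the partial parameters the user supplied, and grounding fills in the remaining fields from a knowledge backend.

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for what the KnowRob query would return (values taken from
# the example above, shortened) -- purely illustrative, not the real backend.
KNOWLEDGE_BASE = {
    ("coffee table", "living room"): {
        "value": "http://www.ease-crc.org/ont/SUTURO.owl#CoffeeTable",
        "link": "iai_kitchen/coffee_table:coffee_table:table_center",
    },
}

@dataclass
class Location:
    furniture_item: str
    room: str
    urdf_link: Optional[str] = None  # unset until grounding

    def ground(self) -> "Location":
        # Resolution fills in the fields the user left unset.
        entry = KNOWLEDGE_BASE[(self.furniture_item, self.room)]
        self.urdf_link = entry["link"]
        return self

loc = Location(furniture_item="coffee table", room="living room").ground()
print(loc.urdf_link)  # iai_kitchen/coffee_table:coffee_table:table_center
```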
Some examples:

Abstract descriptions:

```python
action = ActionDesignator(type='navigate',
                          target_locations=Location(furniture_item=furniture_item,
                                                    room=room))
action.resolve().perform()
```

```python
with semi_real_robot:
    action = ActionDesignator(type='detect',
                              technique=PerceptionTechniques.ALL,
                              object_designator=ObjectDesignatorDescription(types=[ObjectType.JEROEN_CUP]))
    action = action.resolve().perform()
```

Questions
tl;dr:

```python
object_desig = ObjectDesignatorDescription(types=[ObjectType.CUP])
nav_location = Location(furniture_item='kitchen counter', room='kitchen')
nav_action = ActionDesignator(type='navigate', target_locations=nav_location)
nav_action.resolve().perform()
detect_action = ActionDesignator(type='detect',
                                 technique=PerceptionTechniques.ALL,
                                 object_designator=object_desig)
detect_action = detect_action.resolve().perform()
```

Location designator parameters after creation:

```python
{'args': (),
 'semantic_poses': [],
 'poses': [],
 'pose': None,
 'kwargs': {'furniture_item': 'kitchen counter', 'room': 'kitchen'},
 'urdf_link': None}
```

Same designator after running location.ground():

```python
{'args': (),
 'semantic_poses': [{'Item': {'value': 'http://www.ease-crc.org/ont/SUTURO.owl#KitchenCounter',
                              'link': 'iai_kitchen/kitchen_counter:kitchen_counter:table_center',
                              'room': 'http://www.ease-crc.org/ont/SOMA.owl#Kitchen_HUGQVWLM',
                              'pose': {.....}}}],
 'kwargs': {'furniture_item': 'kitchen counter', 'room': 'kitchen'},
 'urdf_link': 'iai_kitchen/kitchen_counter:kitchen_counter:table_center'}
```

Action designator ActionDesignator(type='navigate', target_locations=nav_location):

```python
# pre-resolution:
{'resolve': <bound method NavigateAction.ground of <pycram.designators.action_designator.NavigateAction object at 0x7f3ef0e47970>>,
 'ontology_concept_holders': [<pycram.ontology.ontology_common.OntologyConceptHolder at 0x7f3f0c431c10>],
 'exceptions': {},
 'state': None,
 'executing_thread': {},
 'threads': [],
 'interrupted': False,
 'name': 'NavigateAction',
 'soma': get_ontology("http://www.ease-crc.org/ont/SOMA.owl#"),
 'target_locations': Location(pose=None)}

# post-resolution:
{'resolve': <bound method NavigateAction.ground of <pycram.designators.action_designator.NavigateAction object at 0x7f3ef0e47970>>,
 'ontology_concept_holders': [<pycram.ontology.ontology_common.OntologyConceptHolder at 0x7f3f0c431c10>],
 'exceptions': {},
 'state': None,
 'executing_thread': {},
 'threads': [],
 'interrupted': False,
 'name': 'NavigateAction',
 'soma': get_ontology("http://www.ease-crc.org/ont/SOMA.owl#"),
 'target_locations': [{POSES}]}
```
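Since this PR is about logging designators, one way the pre-/post-resolution dumps above could be captured is to snapshot `vars()` of the designator around `resolve()` and log only the fields that changed. This is a hedged sketch under that assumption — the tiny `NavigateAction` class and `changed_fields` helper here are illustrative, not the real PyCRAM API:

```python
class NavigateAction:
    """Toy stand-in for an action designator with a resolve() step."""
    def __init__(self, target_locations):
        self.name = "NavigateAction"
        self.interrupted = False
        self.target_locations = target_locations

    def resolve(self):
        # Resolution replaces the abstract location with concrete poses
        # (hard-coded here as a stand-in for the KnowRob lookup).
        self.target_locations = [{"x": 8.4356, "y": -0.2637, "z": 0.0}]
        return self

def changed_fields(before, after):
    """Diff two vars() snapshots; returns {field: (old, new)}."""
    return {k: (before[k], after[k]) for k in before if before[k] != after[k]}

action = NavigateAction(target_locations=None)
snapshot = dict(vars(action))   # pre-resolution state
action.resolve()
diff = changed_fields(snapshot, vars(action))
print(diff)  # only 'target_locations' changed during resolution
```

Logging just the diff keeps the log entries small compared to dumping the full `__dict__` (which, as shown above, drags in bound methods and ontology holders).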
This is a draft PR, open for discussion on modelling designators. Types of queries we want to support: