Identifying Research Value #536

This year, I have been approached a few times to talk about the research value of RoboCup@Home. In particular, I have been asked to explain the research value in various venues, and to try to attract HRI researchers to our league.

I think that the best way to do this is to identify the concrete research problems that are addressed in each task. Therefore, I would propose that for each task in the rulebook, we identify the core research problem that it addresses (which should be a current, open problem). This should be a problem for which we could easily envision a resulting publication, if the team comes up with a novel solution. The research problem would be placed in the introduction of each task, in bold, in a black-outlined box.

For the "I Want This" task, it would look something like this:

Research Problem: Generation of legible gaze and pointing gestures; interpretation of human gaze and pointing gestures.
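For concreteness, here is a minimal sketch of how such a box could be typeset, assuming the rulebook's LaTeX sources; the `\researchproblem` macro name is hypothetical, and only standard `\fbox`/`\parbox` primitives are used:

```latex
% Minimal sketch of a bold "Research Problem" statement in a
% black-outlined box, for the introduction of a task chapter.
% The macro name \researchproblem is hypothetical.
\newcommand{\researchproblem}[1]{%
  \noindent\fbox{%
    \parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax}{%
      \textbf{Research Problem:} #1%
    }%
  }\par\medskip
}

% Usage, e.g. in the "I Want This" task:
% \researchproblem{Generation of legible gaze and pointing gestures,
%   interpretation of human gaze and pointing gestures.}
```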
I agree with @justinhart, but I think this is already done (maybe not well done), since every task states its focus, which is supposed to be the open problem to solve. Also, teams are encouraged to include in their TDP the novelty of their approach. I think what can be done to increase the scientific relevance of the league is to evaluate TDPs as the thing they are: papers. Usually, when a reviewer evaluates a paper, they focus heavily on the contribution with respect to the current state of the art.
I don't think that this is already accomplished. Could you, right now, identify an appropriate conference to publish results for every task in this rulebook?
I think there are a lot of possible conferences and journals (ICRA, IROS, Autonomous Robots and Systems, Intelligent Service Robotics), but OK, maybe the focus of each task should be rephrased.
I can totally agree with putting some sort of statement of the research plan in the TDP, but I think we're in agreement here. You could go to each task and say, "Task: I Want This." The idea here is to drive a metric where participation in the league has a clear connection to papers. What I'd really like to see out of this is something where maybe we all agree to submit to the same conference and/or write a paper as a league about the task solved in each stage. For instance, I'd really like to propose that we get a group to collaborate on a paper that goes to Ro-Man or HRI this year for "I Want This."
Some comments here
With all that in mind, and after several years of witnessing failed attempts at taking @Home closer to the conference world, I'd say: pork it! The only viable solution I see is to build our own community of experts, stop depending on Elsevier and Springer, and peer-review our real-world solutions, aiming to take science out of its golden cage of uselessness and start presenting it as the world-changing and goal-driven gorgon it is.
Just today, the comment was made on the thread about the laptop backpacks that the higher-ups would like to see more cooperation between teams, and more papers coming out of the league. This could become a route to making that happen. What if we just give it a try with the "I Want This" task? We get volunteers from the league and try to do a league-wide publication. I think that this could be a vector to the kind of research ROI that they're looking for.
In my opinion, the way to get research and RoboCup more aligned is to have a roadmap. If we keep jumping from left to right, how should anyone integrate his or her research with RoboCup?
Propose tasks that your group finds interesting, but where you're not so far ahead as to make the competition unfair. |
What about having a roadmap is unfair to the competition?
You misread my comment. I agreed with you and suggested that a short-term solution is for teams to suggest tasks.
@justinhart I have this paper with a roadmap for NLP targeting GPSR. Please give it a look. @mnegretev Please check with Jesus if you can come up with something similar for Navigation. It would be nice if you share the outcome with us and send it to the RoboCup Symposium. @MatthijsBurgh The TU/e unified world model looks like a good cornerstone for a roadmap on common knowledge sharing. There is good progress on manipulation with KnowRob regarding the how-to, but there is no deep analysis of what is important for a robot to know in order to understand its environment. This roadmap should be related to the progress of the others, but on the side of abstraction. Please give it a thought. We require volunteers for roadmaps on Manipulation and all branches of Computer Vision (objects, people, poses, activities, reading, etc.).
@kyordhel I like the paper on the NLP roadmap. I think that @MatthijsBurgh, I, and others who want to participate should develop a roadmap for what we want to accomplish as a league overall. The basic structure of the tasks for each year should then be to evaluate how well teams are performing on the tasks, and to set the challenge bar one step beyond what they can actually do. This would ensure that the competition is exciting (because the robots succeed), while ensuring that there is research value (because we don't know how to do everything).
Honest question: Is RoboCup@Home supposed to be a research exchange or a competition to benchmark service robots?
It can't really be one without the other, can it? If you look at the soccer leagues, they normalize a set of tasks that makes soccer harder every year, try to push forward the state of the art in the field (which means original research), and do stuff that results in publications for the teams every year. The criticism that I've heard coming toward the league from the top involves the research productivity of and cooperation within the league. Also, I think that for many participants, not all, but many, a several-month break from research just isn't in the cards. I, personally, plan on pursuing a professorship eventually and can't lose research productivity in my group by putting research on pause for several months to compete if the competition is actually incapable of producing meaningful publications. I'm sure this is the case for others.
@justinhart: With all due respect to the soccer leagues, their approach and application domain are different from and narrower than those of @Home. The Control community finds all research there appealing, as well as a fertile ground to test new approaches on a soccer field. The same does not apply in @Home, which is more focused on A.I. As you brilliantly pointed out, researchers find it moronic to constrain the available computing power. If your algorithm needs more juice, assign more cores. Well, that's not always possible in a robot and, unlike lab scientists, we have to deal with the problem. This often leads to non-publishable results which are more like StackOverflow knowledge: unworthy of being published in a scientific journal, yet many times more referenced and useful. @johaq I think both. You need to (somehow) benchmark performance and then exchange knowledge with those that did better than you. But performance is a tricky word here. Solving a task in a particular way with optimum performance is meaningless if the solution doesn't fit the user's personal preferences, so our robots need to be robust enough to deal with dynamic environments, flexible enough to adapt to the needs of each particular user, and skillful enough to solve the task efficiently. The term performance blurs and becomes meaningless. What are you aiming for? Robustness, speed, precision, adaptability, local optimization, global optimization? I can't give you an answer, for most of our tasks are NP-complete. So far, I think the best approach is to ask grandpa, mom, or any household professional whether the robot did it well.
@kyordhel I respect your perspective, but I think that we should be adopting something similar in @Home. I'm actually losing participants because their supervisors don't think that they can be productive in this, and shouldn't the goal of a program that you run at a university be to get everyone into good professorships and research positions after graduation? The way that you do that is by getting people more publishable research. Participating in a competition for the sake of participation doesn't get you there. If we focus on publishable work, I don't really see what's lost here.
@justinhart I really don't want to start a discussion here about the topic. I understand your perspective, and I must acknowledge it has been a long-time concern in @Home. I have no problem with what you propose, but I would slightly move the focus, while adding some constraints on top: we focus on solving tasks and testing robots' performance.
I don't want to see RoboCup@Home becoming a bunch of ad-hoc scientific experiments masked as tests. My proposal is to provide a set of test-beds systematically designed to allow research groups to try several approaches to solve a problem, compare results, improve, and build on top of successful solutions. From my perspective, publishable work should be a consequence of solving a task, not its objective. Nonetheless, and as I said before, this competition is built by peers. PR tasks with your approach and, if the TC approves them, yay!!! You changed RoboCup@Home.
Ok, small rant time:
@johaq I'm not biting that bait. I can summarize with two key aspects.
My idea of happiness is an ideal middle ground where you can do (2) and get your research published in a journal that values practical results over nice plot lines. Of course that won't happen, so sooner or later @Home needs to evolve to meet the expectations of those who feed us.
The original proposition was simply to put the research problem into the description of the task. How is this such a contentious point?
Sorry about the misunderstanding. I understood that you wanted to design tests aiming to directly fetch data for publishing. This is a screenshot of the 2009 rulebook; I guess you meant the Focus (or something like that). It was present from then until 2019. I removed them for the sake of space (to make the rulebook much more compact). If you mean something like that, go ahead.
YES! Though slightly beyond this: I would say that the idea is to state which research communities the task targets in terms of current research topics, hinting at the kind of papers that could result.
I suggest grabbing one of the tests that are in the rulebook and adding the section you want. Based on that PR, we can discuss details (which seem to be minor, in my opinion). Once that reaches consensus, just repeat it for the rest of the tests.
That makes sense to me. Will do. I'm a bit backlogged, and I know that I'm behind, but I'll try to address this stuff quickly.
I don't have a problem with stating the research focus in that way for every task if most people want that. I would prefer not to have it, since we wanted the rulebook to be as short and simple as possible, and I think that tasks should be self-explanatory in that way. Do we really need to add that storing groceries is focused on manipulation?
Absolutely! I think that people can take the time to read and write two lines of text. On the other hand, we could be writing tasks with absolutely no research value, where the author can't identify the point of the task. The point is also to ensure that we're not putting things into the rulebook that are complete wastes of time and do nothing to bring us closer to domestic service robots.
The rulebook already suffers from having so much narrative in it (it makes it hard to find specific things, because they're buried in text that has no force). Things that don't impact how the task is conducted or solved (like our assessment of what research would benefit performance) should be put somewhere else, like chapter 2, with the rest of the meta commentary (or in a separate research roadmap).
My point is that having no research value does not automatically mean not useful for @Home. As an example: a quick glance at arXiv gives me two papers on a robot following a human in the last 12 months, and two more papers when including quadcopters following humans. Not exactly the hottest topic. I still think that it is an ability we should consider including in @Home, since I feel it is very useful for domestic service robots. All I'm saying is that I feel the first and foremost question should be: "Do I want my service robot to be able to do this?" "Is this a hot current research topic?" should be secondary. EDIT: I think a problem at the root of the ever-changing rulebook, and of the weird identity crisis @Home is in, with the conflict of interests between the SPLs and the OPL and requests to be more like soccer, is that there is no answer to the question: Is @Home a research league first or an integration league first?
Having no research value means it has no value for @Home. I have no idea why I should get my research group to work on stuff with no research value. I know I'm not alone in feeling this way.
I strongly disagree. If there is absolutely zero research value, maybe. But I do not see a strong correlation between research value and what robots in @Home are currently lacking. If we go by research value, we should think about turning @Home into a deep learning challenge...
So, when participants interview for postdocs and professorships, and when professors go up for tenure, and they don't have many publications because they're spending time on RoboCup@Home, how do you suppose that they get hired? How do you suppose that we justify the expenditures of sponsors, or spending research funds on this?
I think the goal of @Home is to make service robots do things they cannot already do. It does not have to be a hot research topic. Anything that service robots cannot already do and that cannot be trivially implemented probably has a good research question behind it. If the research value of integration is not appreciated as much as we believe, this is our chance to make the case as a community. One thing we can do is to identify the research questions and novel ideas that can be sparked by @Home tests. Having something written down in the rulebook will help us focus efforts.
I mean, those are two different things.
I would not say "research value". It's ambiguous and can mean anything or nothing. What's your goal? What are you researching for? Example:
All 4 are research to me. Whether you use a novel approach or not is pretty much irrelevant to @Home; we want stuff done. Each research group has its priorities. Now, if having a couple of lines stating "this test is about people recognition and tracking" will make some atHomers happy, add them. You can still make publications about control or real time, if that fits you.
@robotijn, you're the founder of the league; perhaps you can share your vision with us. @komeisugiura, @iocchi, @swachsmu, you're trustees and/or have years of professorship getting people involved in @Home. Your insight is welcome here.
Well, to push this further, what is the value in doing stuff with no research value? "We want stuff done" doesn't identify why we want stuff done.
If the issue is that we don't want to use the word "research", we could always use the word "focus" (which we were already using some years back). This would even help teams decide which test to go for, considering their strengths. If the issue is length, we could always establish that the "Focus" of the test can be no more than one line and cannot occupy a whole section: something that can be inserted right below the overall description at the start of the page, like "Focus: navigation under unknown circumstances."
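A one-line macro would keep such a "Focus" entry uniform across tests. A minimal sketch, again assuming the rulebook's LaTeX sources (the `\taskfocus` name is hypothetical):

```latex
% Hypothetical one-line "Focus" entry, inserted right below a test's
% overall description and deliberately limited to a single line.
\newcommand{\taskfocus}[1]{%
  \par\noindent\textbf{Focus:} \emph{#1}\par
}

% Usage:
% \taskfocus{Navigation under unknown circumstances.}
```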
@YuqianJiang and all. Well, right. The point is to make sure that each task has a point, and to identify what that point is ahead of time. I guess what I'm building to here is: supposing that we have a roadmap, how do we see all of these things fitting into that roadmap? Can we also do a triage on all of the tasks to see if the stuff is worthwhile? If a task reduces to a few downloadable packages and a state machine saying, "Please put the thing into the thing for me," it's probably also a monumental waste of time.
Alright. I'm going to make some last commentary, since this is clearly an unpopular motion. I never said that integration has no value. I don't want to fight about this, but I seriously don't get why this generated this level of resistance.
Justin had closed this issue, but I am not comfortable leaving it at this stage of discussion. Thus, let me make some final meta-comments from a more senior perspective on this discussion (as requested by @kyordhel).
It is your task as the TC to implement all 5 points in one rulebook, and this is not a simple task. In the discussion there have been great ideas (even beyond the current rulebook):
I would like to encourage you to think further in these directions. Finally, one point on research vs. integration: system integration is a requirement for the kind of research we are doing in RoboCup. As a league, we should do whatever we can to promote system-oriented research. This is a strength, and it distinguishes us from other benchmarks that are only based on datasets. But system integration is not a goal in itself. We have the standard platforms because they (should) simplify system integration to a certain degree. Ideally (in the future), there should be system distributions which can be used by any new team for a direct start into RoboCup@Home, so that the team can concentrate on their own research and contribute it to RoboCup. Currently, much work is spent on integration, but one point on the RoboCup@Home roadmap should be to significantly reduce this amount (this is itself a research question in the topic of system integration). Thus, I strongly want to encourage you to strengthen and keep the research aspect in the competition (and also to keep using the name "research" for it), although it is a permanent struggle with being open to new teams, having teams on different levels, and placing system-oriented publications in the research community.
Damn @swachsmu, you made me tear up a bit there. Fully and wholeheartedly agree.
Dear all, I totally agree with Sven's summary (and with Mauricio's comment ;-))). Finally, the research focus in the rulebook should be expanded. In order to avoid confusion between research content and the technicalities of the rules, we can create a separate chapter for each test with a research perspective on the test. This chapter can include, for example: the scientific relevance of the problem; similar tasks presented in the literature; available solutions; experiments made in laboratories; other competitions or benchmarking activities addressing this or a similar task; and the novelty of the @Home task. Let's keep improving the quality of the rulebook, of the solutions, and of the papers we write using the @Home scenario. Best regards,
Added focuses for all tests in #722.