
Identifying Research Value #536

Closed
justinhart opened this issue Feb 1, 2019 · 43 comments

@justinhart
Member

A few times this year, I have been approached to talk about the research value of RoboCup@Home: to explain it in various venues, and to try to attract HRI researchers to our league.

I think that the best way to do this is to identify the concrete research problems addressed in each task. Therefore, I propose that for each task in the rulebook, we identify the core research problem it addresses (which should be a current, open problem). This should be a problem for which we could easily envision a resulting publication if a team comes up with a novel solution. The research problem would be placed in the introduction of each task, in bold, in a black outlined box.

For the "I Want This" task, it would look something like this.

Research Problem: Generation of legible gaze and pointing gestures, interpretation of human gaze and pointing gestures.
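
A minimal sketch of how such a box might look in the rulebook's LaTeX source; the `tcolorbox` package and the styling options are assumptions for illustration, not the rulebook's actual conventions:

```latex
% Hypothetical "Research Problem" box for a task introduction.
% Package choice and styling are illustrative only.
\documentclass{article}
\usepackage{tcolorbox}

% Black-outlined box with a bold title, as proposed above.
\newtcolorbox{researchproblem}{
  colback=white,
  colframe=black,
  fonttitle=\bfseries,
  title=Research Problem
}

\begin{document}
% Usage in the "I Want This" task description:
\begin{researchproblem}
  \textbf{Generation of legible gaze and pointing gestures;
  interpretation of human gaze and pointing gestures.}
\end{researchproblem}
\end{document}
```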

@mnegretev

mnegretev commented Feb 1, 2019

I agree with @justinhart, but I think this is already done (maybe not well done), since every task states its focus, which is supposed to be the open problem to solve. Also, teams are encouraged to include in their TDP the novelty of their approach.

I think what can be done to increase the scientific relevance of the league is to evaluate TDPs as the thing they are: papers. Usually, when a reviewer evaluates a paper, they focus heavily on the contribution with respect to the current state of the art.
Thus, maybe it should be mandatory for teams to include in the TDP a good review of the literature and to emphasize their contribution with respect to the state of the art, not only with respect to the team's previous work. If this is done, it could ease the writing of a future paper.

@justinhart
Member Author

I don't think that this is already accomplished. Could you, right now, identify an appropriate conference to publish results for every task in this rulebook?

@mnegretev

I think there are a lot of possible conferences and journals (ICRA, IROS, Autonomous Robots and Systems, Intelligent Service Robotics), but OK, maybe the focus of each task should be rephrased.
Anyway, reviewing the state of the art and emphasizing one's contribution is something done in every scientific paper. Making it mandatory (not merely encouraged) for all teams to state their contribution with respect to the state of the art could also help increase the scientific relevance, together with rephrasing each task's focus to better state the open problem to solve.

@justinhart
Member Author

I can totally agree with putting some sort of statement of the research plan in the TDP, so I think we're in agreement here. You could go to each task and say:

Task: I want this
Problem: Generation & Interpretation of Gaze & Point Gestures
Venue: Ro-Man, HRI, IJSR

The idea here is to drive a metric where participation in the league has a clear connection to papers. What I'd really like to see out of this is something where maybe we all agree to submit to the same conference and/or write a paper as a league about the task solved in each stage. For instance, I'd really like to propose that we get a group who collaborate on a paper that goes to Ro-Man or HRI this year for "I Want This."

@kyordhel
Contributor

kyordhel commented Feb 1, 2019

Some comments here

  • Messing with TDPs is extremely complex. Turn them into actual papers (we did) and you'll get a lot of teams with awesome research that can't get a robot going, while good teams (finalists) with a more technical approach will be filtered out. This also directly impacts teams formed mostly of undergrads. Take a look at the winners of the last 3 years; they aren't necessarily the tip of the spear in SotA research.

  • Until 2018, every test had a focus, although nobody really paid attention to it. We tried to standardize testing to produce benchmarking, and the outcome wasn't good. We are changing our approach to a goal-driven one, and for 2019 we care little about how people solve the tasks as long as the task is solved. IMHO, that's terrible in terms of producing data for scientific publishing and team collaboration in research. However, I'm quite sure it will push innovation.

  • The scientific community assigns little to no value to application. Most of the techniques and skills we use have been considered solved or surpassed for a decade, under lab conditions. Mount that thing on a robot, coordinate everything, and reviewers will tell you: "that's an engineering problem, not science", or "buy better hardware", or "your experiments are not repeatable". In all cases your paper is most likely to get rejected. We try to deal with real-world conditions as much as we can. In this aspect, I think @Home is closer to industry than to academia.

  • A robot can only run algorithms that fit on a laptop. That prevents us from using the most advanced techniques until they are mature enough to fit. We already tried external computing and found it unreliable. Further, our people need expertise in several areas to merge everything together, meaning that the learning curves are steeper and much more effort is required to stay up to date.

With all that in mind, and after several years of witnessing failed attempts to take @Home closer to the conference world, I'd say: pork it! The only viable solution I see is to build our own community of experts, stop depending on Elsevier and Springer, and peer-review our real-world solutions, aiming to take science out of its golden cage of uselessness and start presenting it as the world-changing and goal-driven gorgon it is.

kyordhel closed this as completed Feb 1, 2019
kyordhel reopened this Feb 1, 2019
@justinhart
Member Author

Just today the comment was made on the thread about the laptop backpacks that the higher-ups would like to see more cooperation between teams, and more papers coming out of the league. This could become a route to making that happen.

What if we just give it a try with the "I Want This" task? We get volunteers from the league and try to do a league-wide publication. I think that this could be a vector to the kind of research ROI that they're looking for.

@MatthijsBurgh
Collaborator

In my opinion, the way to get research and RoboCup more aligned is to have a roadmap. If we keep jumping from left to right, how should anyone incorporate their research with RoboCup?

@justinhart
Member Author

Propose tasks that your group finds interesting, but where you're not so far ahead as to make the competition unfair.

@MatthijsBurgh
Collaborator

What about having a roadmap is unfair for the competition?

@justinhart
Member Author

You misread my comment. I agreed with you and suggested that a short-term solution is for teams to suggest tasks.

@kyordhel
Contributor

kyordhel commented Feb 2, 2019

@justinhart I have this paper with a roadmap for NLP targeting GPSR. Please give it a look.

@mnegretev Please check with Jesus if you can come up with something similar for Navigation. It would be nice if you shared the outcome with us and sent it to the RoboCup Symposium.

@MatthijsBurgh TU/e's unified world model looks like a good cornerstone for a roadmap on common knowledge sharing. There has been good progress on the how-to of manipulation with KnowRob, but there is no deep analysis of what is important for a robot to know in order to understand its environment. This roadmap should be related to the progress of the others, but on the abstraction side. Please give it a thought.

We require volunteers for roadmaps on Manipulation and all branches of Computer Vision (objects, people, poses, activities, reading, etc.).

@justinhart
Member Author

@kyordhel I like the paper on the NLP roadmap. I think that @MatthijsBurgh, I, and others who want to participate should develop a roadmap for what we want to accomplish as a league overall. The basic structure of the tasks for each year should then be to evaluate how well teams are performing on the tasks, and to set the challenge bar one step beyond what they can actually do. This would ensure that the competition is exciting (because the robots succeed), while ensuring that there is research value (because we don't know how to do everything).

@johaq
Member

johaq commented Feb 5, 2019

Honest question: Is robocup@home supposed to be a research exchange or a competition to benchmark service robots?

@justinhart
Member Author

It can't really be one without the other, can it?

If you look at the soccer leagues, they normalize a set of tasks that makes soccer harder every year, try to push forward the state of the art in the field (which means original research), and do stuff that results in publications for the teams every year.

The criticism that I've heard directed at the league from the top concerns research productivity and cooperation within the league.

Also, I think that for many participants (not all, but many), a several-month break from research just isn't in the cards. I personally plan on pursuing a professorship eventually, and I can't lose research productivity in my group by putting research on pause for several months to compete if the competition is incapable of producing meaningful publications. I'm sure this is the case for others.

@kyordhel
Contributor

kyordhel commented Feb 5, 2019

@justinhart: With all due respect to the soccer leagues, their approach and application domain are different from, and narrower than, those of @Home. The control community finds all the research there appealing, as well as a fertile ground to test new approaches on a soccer field. The same does not apply to @Home, which is more focused on AI.

As you brilliantly pointed out, researchers find it moronic to constrain the available computing power: if your algorithm needs more juice, assign more cores. Well, that's not always possible on a robot and, unlike lab scientists, we have to deal with the problem. This often leads to unpublishable results which are more like StackOverflow knowledge: unworthy of being published in a scientific journal, yet many times more referenced and useful.

@johaq I think both. You need to (somehow) benchmark performance and then exchange knowledge with those who did better than you. But performance is a tricky word here. Solving a task in a particular way with optimum performance is meaningless if the solution doesn't fit the user's personal preferences, so our robots need to be robust enough to deal with dynamic environments, flexible enough to adapt to the needs of each particular user, and skillful enough to solve the task efficiently. The term performance blurs and becomes meaningless. What are you aiming for? Robustness, speed, precision, adaptability, local optimization, global optimization? I can't give you an answer, for most of our tasks are NP-complete.

So far, I think the best approach is to ask grandpa, mom, or any household professional whether the robot did it well.

@justinhart
Member Author

@kyordhel I respect your perspective, but I think that we should be adopting something similar in @home. I'm actually losing participants because their supervisors don't think that they can be productive in this, and shouldn't the goal of a program that you run at a university be to get everyone into good professorships and research positions after graduation? The way that you do that is by getting people more publishable research. Participating in a competition for the sake of participation doesn't get you there.

If we focus on publishable work:

  1. We fix the problems coming from above.
  2. We attract more top researchers to our league.
  3. We get better jobs when we graduate/move on.
  4. We can still have a great competition.

I don't really see what's lost here.

@kyordhel
Contributor

kyordhel commented Feb 5, 2019

@justinhart I really don't want to start a discussion here about the topic. I understand your perspective, and I must acknowledge it has been a long-time concern in @Home. I have no problem with what you propose, but I would slightly shift the focus while adding some constraints on top.

We focus on solving tasks and testing robots' performance:

  • In dynamic, real world scenarios
  • Trying to apply state-of-the-art research into the solutions
  • Choosing tasks that are relevant in domestic environments (i.e. satisfy the needs of the potential market)
  • Without forcing a predefined solution or strategy
  • With tests designed to have scientific relevance
  • Aiming to produce solutions which result in scientific publications

I don't want to see RoboCup@Home becoming a bunch of ad-hoc scientific experiments masked as tests. My proposal is to provide a set of test-beds systematically designed to allow research groups to try several approaches to solve a problem, compare results, improve, and build on top of successful solutions. From my perspective, publishable work should be a consequence of solving a task, not its objective.

Nonetheless, and as I said before, this competition is built by peers. PR tasks with your approach, and if the TC approves them, yay!!! You changed RoboCup@Home.

@johaq
Member

johaq commented Feb 5, 2019

Ok, small rant time:
The reason I posed the question in a slightly provocative way is that I see a problem with trying to make everything fit "current" research questions in @home. Do we want robots in @home to learn a specific grasp motion over thousands of iterations in a lab, and then transfer-learn the trajectory to fit other objects, but in the same lab setting with the object sitting in the exact same spot? Or do we just want a robot to get a can from table A to shelf B? The first is a current research topic; the second is useful for @home.
My experience is that people do not want to test their research on robots not because it does not fit their research topic, but because it would show that their research is not even close to applicable in the real world yet, and does not work even half as well on a robot as their published "state-of-the-art accuracy".

@kyordhel
Contributor

kyordhel commented Feb 5, 2019

@johaq I'm not biting that bait. I can summarize with two key aspects.

  1. You can find thousands of strong arguments on both sides: a) "robots are ready to deploy in <10 years' time", and b) "we're nowhere close". One eases finding funding; the other doesn't. There is also a lot of specialized literature pointing out that state-of-the-art research is "unreliable" in industry for several reasons (unless your headquarters are in Silicon Valley).
  2. Many researchers like RoboCup@Home because you can test that 1950 article that wasn't implementable then. The results won't get accepted in any journal, but they allow a robot with a Raspberry Pi to solve a task decently.

My idea of happiness is an ideal middle ground where you can do (2) and get your research published in a journal that values practical results over nice plot lines. Of course that won't happen, so sooner or later @Home needs to evolve to meet the expectations of those who feed us.

@justinhart
Member Author

The original proposition was simply to put the research problem into the description of the task. How is this such a contentious point?

@kyordhel
Contributor

kyordhel commented Feb 5, 2019

Sorry about the misunderstanding. I understood that you wanted to design tests aiming to directly produce data for publishing.

This is a screenshot from the 2009 rulebook. I guess you meant the Focus (or something like that).

[screenshot: the "Focus" block of a test in the 2009 rulebook]

It was present from then until... 2019. I removed the focus statements for the sake of space (to make the rulebook more compact). If you mean something like that, go ahead.

@justinhart
Member Author

YES!

Though, slightly beyond this: I would say the idea is to state which research communities the task targets in terms of current research topics, hinting at the kind of papers that could result.

@balkce
Contributor

balkce commented Feb 5, 2019

I suggest grabbing one of the tests in the rulebook and adding the section you want. Based on that PR, we can discuss the details (which seem minor, in my opinion). Once that reaches consensus, just repeat it for the rest of the tests.

@justinhart
Member Author

justinhart commented Feb 5, 2019 via email

@johaq
Member

johaq commented Feb 5, 2019

I don't have a problem with stating the research focus in that way for every task if most people want that. I would prefer not to have it, since we wanted the rulebook to be as short and simple as possible, and I think that tasks should be self-explanatory in that way. Do we really need to add that storing groceries is focused on manipulation?

@justinhart
Member Author

Absolutely! I think that people can take the time to read and write two lines of text. On the other hand, we could be writing tasks with absolutely no research value, where the author can't identify the point of the task. The point is also to ensure that we're not putting things into the rulebook that are complete wastes of time and do nothing to bring us closer to domestic service robots.

@nickswalker
Member

The rulebook already suffers from having so much narrative in it (it is hard to find specific things because they're buried in text that has no force). Things that don't impact how the task is conducted or solved (like our assessment of what research would benefit performance) should be put somewhere else, like chapter 2, with the rest of the meta commentary (or in a separate research roadmap).

@johaq
Member

johaq commented Feb 5, 2019

My point is that having no research value does not automatically mean not useful for @home. As an example: a quick glance at arXiv gives me two papers on a robot following a human in the last 12 months, and two more when including quadcopters following humans. Not exactly the hottest topic. I still think it is an ability we should consider including in @home, since I feel it is very useful for domestic service robots.

All I'm saying is that I feel the first and foremost question should be: "Do I want my service robot to be able to do this?" "Is this a hot current research topic?" should be secondary.

EDIT: I think the root of the ever-changing rulebook, and of this weird identity crisis @home is in (with the conflict of interests between the SPLs and the OPL, and requests to be more like soccer), is that there is no answer to the question: Is @home a research league first or an integration league first?

@justinhart
Member Author

Having no research value means it has no value for @home. I have no idea why I should get my research group to work on stuff with no research value. I know I'm not alone in feeling this way.

@johaq
Member

johaq commented Feb 5, 2019

I strongly disagree.

If there is absolutely zero research value, maybe. But I do not see a strong correlation between research value and what robots in @home are currently lacking.

If we go by research value, we should think about turning @home into a deep learning challenge...
To me, robocup@home was always about integration, and that is just not a sexy research topic, I know. That does not make it valueless.

@justinhart
Member Author

So, when participants interview for postdocs and professorships, and when professors go up for tenure, and they don't have many publications because they're spending time on RoboCup@Home, how do you suppose they get hired? How do you suppose we justify the expenditures of sponsors, or spending research funds on this?

@YuqianJiang

I think the goal of @home is to make service robots do things they cannot already do. It does not have to be a hot research topic. Anything that service robots cannot already do, and that cannot be trivially implemented, probably has a good research question behind it. If the research value of integration is not appreciated as much as we believe it should be, this is our chance to make the case as a community. One thing we can do is to identify the research questions and novel ideas that can be sparked by @home tests. Having something written down in the rulebook will help us focus our efforts.

@johaq
Member

johaq commented Feb 5, 2019

I mean, those are two different things.
Do I want none of the tasks to deal with current big topics? No, of course not.
Does every task necessarily have to deal with one? I think not.

@kyordhel
Contributor

kyordhel commented Feb 5, 2019

I would not say "research value". It's ambiguous and can mean anything or nothing. What's your goal? What are you researching for?

Example:

Every competition, my robot tries to follow a person without losing track of her.

  • My goal is to publish a paper on person recognition and tracking
    • No, this is not publishable at all
  • My goal is to publish a paper on real-time tracking using embedded systems
    • Yes, this is publishable (the fact you're tracking a person is irrelevant).
  • My goal is to publish a paper on bipedal control using XYZ models
    • This is publishable if the robot is bipedal (whether it is tracking the person or not is irrelevant).
  • My goal is to have a robot going side by side while having a chat, regardless if it takes 50 years
    • Definitely not publishable

All four are research to me. Whether you use a novel approach or not is pretty much irrelevant to @Home; we want stuff done. Each research group has its priorities.

Now, if having a couple of lines stating "this test is about people recognition and tracking" will make some atHomers happy, add them. You can still publish about control or real-time systems if it suits you.

@kyordhel
Contributor

kyordhel commented Feb 5, 2019

So, when participants interview for postdocs and professorships, and when professors go up for tenure, and they don't have many publications because they're spending time on RoboCup@Home, how do you suppose they get hired? How do you suppose we justify the expenditures of sponsors, or spending research funds on this?

@robotijn, you're the founder of the league, perhaps you can share your vision with us.

@komeisugiura, @iocchi, @swachsmu You're trustees and/or have years of professorship getting people involved in @Home. Your insight is welcome here.

@justinhart
Member Author

justinhart commented Feb 5, 2019

Well, to push this further, what is the value in doing stuff with no research value? "We want stuff done" doesn't identify why we want stuff done.

@balkce
Contributor

balkce commented Feb 5, 2019

If the issue is that we don't want to use the word "Research", we could always use the word "Focus" (which we were already using in years past). This would even help teams decide which test to go for, considering their strengths.

If the issue is length, we could establish that the "Focus" of the test can be no more than one line and not occupy a whole section: something inserted right below the overall description at the start of the page, like:

"Focus: navigation under unknown circumstances."

@justinhart
Member Author

@YuqianJiang and all: well, right. The point is to make sure that there is a point to each task, and to identify that point ahead of time. I guess what I'm building to here is: supposing that we have a roadmap, how do we see all of these tasks fitting into it? Can we also do a triage on all of the tasks to see whether they are worthwhile? If a task reduces to a few downloadable packages and a state machine saying, "Please put the thing into the thing for me," it's probably a monumental waste of time.

@justinhart
Member Author

Alright. I'm going to make some last commentary, since this is clearly an unpopular motion.

I never said that integration has no value.
I never said that what is tested has to be a current "hot research topic."
I said that we should identify the point of the test.

I don't want to fight about this, but I seriously don't get why this generated this level of resistance.

@swachsmu

swachsmu commented Feb 8, 2019

Justin had closed this issue, but I do not feel comfortable leaving it at this stage of the discussion.

Thus, let me make some final meta-comments from a more senior perspective on this discussion (as requested by @kyordhel).

  1. You are currently doing a very important job of making the rulebook more attractive to research groups, young scientists, the public audience, and sponsors (in this order).
  2. RoboCup was founded because of research: its founders looked for a challenge to compare scientific AI approaches, ideas, and algorithms at the system level.
  3. RoboCup is now established as an event and organization for attracting young scientists to AI/robotics topics (see also RoboCup Junior) and promoting careers in research.
  4. RoboCup has attracted important sponsors because more and more companies see value in the research done in RoboCup and in the education participants receive there.
  5. RoboCup also has the task of promoting research topics to the public so that research can be experienced. This also justifies why public funding should support RoboCup-related research.

It is your task as TC to implement all 5 points in one rulebook, and this is not a simple task.

In the discussion there have been great ideas (even beyond the current rulebook):

  • a league-wide publication on a roadmap
  • a link to scientific conferences
  • relating the work done in RoboCup@Home to the state of the art
  • identifying research questions (including the topic of integration)

I would like to encourage you to think further in these directions.

Finally, one point on research vs. integration: system integration is a requirement for the kind of research we are doing in RoboCup. As a league, we should do whatever we can to promote system-oriented research. This is a strength and distinguishes us from other benchmarks that are based only on datasets. But system integration is not a goal in itself. We have the standard platforms because they (should) simplify system integration to a certain degree. Ideally (in the future), there should be system distributions which any new team can simply use for a direct start into RoboCup@Home, so that the team can concentrate on its own research and contribute it to RoboCup. Currently, much work is spent on integration, but one point on the RoboCup@Home roadmap should be to significantly reduce this amount (this is itself a research question in the topic of system integration).

Thus, I strongly want to encourage you to strengthen and keep the research aspect in the competition (and also to use the name "research" for it), although it is a permanent struggle with being open to new teams, having teams at different levels, and placing system-oriented publications in the research community. I think it is worth fighting for, and I have had very good experiences getting very motivated and highly skilled researchers, as well as interesting research questions, out of our RoboCup@Home activities.

kyordhel reopened this Feb 8, 2019
@balkce
Contributor

balkce commented Feb 8, 2019

Damn @swachsmu, you made me tear up a bit there.

Fully, wholeheartedly agree.

@iocchi
Member

iocchi commented Feb 19, 2019

Dear all,
thanks for the discussion. I think this thread should always remain open.

I totally agree with Sven's summary (and with Mauricio's comment ;-)))
Let me add that system integration cannot be the only goal of @Home, since companies will be much better than us at this capability, and probably also at organizing a competition for integrating components of service robots. We must lead a research-oriented competition for the many reasons mentioned above, where system integration is an important aspect because we are using a system-level benchmarking methodology, but the rulebooks and the developed solutions should always focus on research.
I also agree that sometimes (but not always) robust, old-fashioned solutions are more usable (sometimes even more effective) than new trends. Moreover, not all combinations of methods work. Evaluating the mutual dependencies of different technologies is also research (specifically, integrated research), in my opinion.

Finally, the research focus in the rulebook should be expanded. To avoid confusion between research content and the technicalities of the rules, we can create a separate chapter for each test with a research perspective on the test. This chapter can include, for example: the scientific relevance of the problem, similar tasks presented in the literature, available solutions, experiments made in laboratories, other competitions or benchmarking activities addressing this or a similar task, the novelty of the @Home task, progress over the years, future directions, etc.
We are all researchers and a good way to focus on research when writing the rulebook is to provide a literature review about the competition tasks.
This will also help teams publish more papers using the RoboCup@Home scenario as a testbed. Sometimes good solutions shown in @Home are not turned into scientific papers. In my opinion, this is not because of a lack of scientific value or research opportunities in @Home tasks, but because of the difficulty of putting the results in the context of the state of the art.

Let's keep improving the quality of the rulebook, of the solutions, and of the papers we write using the @Home scenario.

Best regards,
Luca.

kyordhel pinned this issue Apr 5, 2019
MatthijsBurgh removed their assignment Jan 28, 2020
@johaq
Member

johaq commented Feb 21, 2020

Added focuses for all tests in #722.

johaq closed this as completed Feb 21, 2020