Firstly, I would like to thank you for this fantastic work!
I am not an expert; I am more of a user of dependency parsing than a researcher, but I need highly accurate dependency parsing (I am trying to build true semantic parsing).
As you know, the current number 1 SOTA, Label Attention Layer + HPSG + XLNet (Mrini et al., 2019), has a LAS of 96.26.
While this is impressive, it is not accurate enough for many semantic downstream tasks!
So I'm looking for the future state of the art: what do you think would be the most promising direction?
I'm really interested in merging the best ideas from the other SOTA systems into a new state of the art that is the best of all.
But some techniques are incompatible with others.
So let me ask some noob questions:
Could your crfpar benefit from using XLNet? From using HPSG? And/or from a label attention layer?
Actually, I am the one who suggested that the HPSG paper's author experiment with XLNet instead of BERT (and it gave accuracy gains).
I suggested two other follow-up experiments to him, but he never took the time to do them.
So let me share them with you:
Ranger achieves huge accuracy gains on computer vision tasks, but sadly, almost no NLP researcher uses it (or is even aware of its existence).
It might need some fine-tuning for transformers; e.g. maybe gradient centralization will need to be disabled (or maybe the contrary) — see the sketch below.
Related: lessw2020/Ranger-Deep-Learning-Optimizer#13
But I do believe that the first researcher to tune Ranger for NLP tasks / transformers will be able to improve the SOTA on many tasks for free.
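For concreteness, here is a minimal sketch of the kind of experiment I mean, assuming the Ranger class from the lessw2020/Ranger-Deep-Learning-Optimizer repo (the `use_gc` flag comes from that repo's 2020 version with gradient centralization; the model and hyperparameters are placeholders, not a real parser config):

```python
# Hypothetical sketch: swapping Ranger in for Adam when training a
# transformer-based model. Assumes the Ranger class from
# lessw2020/Ranger-Deep-Learning-Optimizer; `use_gc` toggles gradient
# centralization, which may or may not help for transformer layers
# (that is the open question in issue #13 linked above).
import torch
from ranger import Ranger  # from lessw2020/Ranger-Deep-Learning-Optimizer

model = torch.nn.Linear(768, 768)  # placeholder for a real transformer

# Variant 1: Ranger with gradient centralization enabled (the default).
optimizer = Ranger(model.parameters(), lr=1e-3, use_gc=True)

# Variant 2: gradient centralization disabled, in case it hurts
# transformers rather than helping.
# optimizer = Ranger(model.parameters(), lr=1e-3, use_gc=False)

for step in range(100):
    x = torch.randn(32, 768)
    loss = model(x).pow(2).mean()  # dummy loss just to drive updates
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The point of the experiment would simply be to compare both variants against an Adam baseline on the same task.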
Hi, thanks for your suggestions.
Intuitively, crfpar may benefit from PLMs like XLNet, but I haven't conducted those experiments yet. I will let you know when they are completed. I have tried a joint framework of dependency and constituency parsing, like HPSG, on crfpar, but found only very small gains.
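For anyone who wants to try the XLNet experiment, here is a minimal sketch of the usual recipe, using the HuggingFace transformers library to pool XLNet subword states into word-level features for a biaffine/CRF-style parser (the pooling scheme mirrors what BERT-based parsers commonly do; nothing here is crfpar's actual API):

```python
# Hypothetical sketch: extracting XLNet features as word representations
# that could be concatenated with word/char embeddings before a parser's
# BiLSTM encoder. Not crfpar's real interface.
import torch
from transformers import XLNetModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-large-cased")
xlnet = XLNetModel.from_pretrained("xlnet-large-cased")

words = ["The", "parser", "reads", "a", "sentence", "."]

# Tokenize each word separately so its subwords can be pooled back
# into one vector per word.
subword_ids, word_spans = [], []
for w in words:
    pieces = tokenizer.encode(w, add_special_tokens=False)
    word_spans.append((len(subword_ids), len(subword_ids) + len(pieces)))
    subword_ids.extend(pieces)

input_ids = torch.tensor([subword_ids])
with torch.no_grad():
    hidden = xlnet(input_ids).last_hidden_state[0]  # (num_subwords, 1024)

# Mean-pool each word's subword states into a single word-level vector,
# which a biaffine/CRF parser would then consume as input features.
word_reprs = torch.stack([hidden[s:e].mean(dim=0) for s, e in word_spans])
print(word_reprs.shape)  # torch.Size([6, 1024])
```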