Ablation experiment on the ChatGPT-assisted step #2

Open
defensetongxue opened this issue Jun 7, 2023 · 0 comments

Comments

@defensetongxue

As mentioned in the video, the locally trained model produces a first round of information, which is then compressed and passed to GPT-4 so that it can make a judgment using its general knowledge.

However, I personally doubt whether this general knowledge can help the model do further filtering. My reasoning is that the local model, having seen the locally collected data, has more specialized competence, whereas the training data used for ChatGPT may not include this Chinese corpus, so the local model is likely to perform better on this domain-specific task.

Concretely, the ablation would be to have the local model output a limited number of topics and take them directly as the final result (see the sketch below). Alternatively, may I ask whether the author has observed whether ChatGPT actually plays a role in filtering out low-value topics?
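
Below is a rough sketch of the two conditions I have in mind, just to make the comparison concrete. All names here (`extract_topics_local`, `gpt4_filter_topics`, `evaluate_topics`) are placeholders for illustration, not code from this repository:

```python
# Sketch of the proposed ablation: local model alone vs. local model + GPT-4 filtering.
# Every function here is a placeholder, not this repository's actual implementation.

def extract_topics_local(corpus, top_k=20):
    """Placeholder: run the locally trained model and return its top-k topics."""
    raise NotImplementedError

def gpt4_filter_topics(topics):
    """Placeholder: send the compressed topic list to GPT-4 and keep only the
    topics it judges to be informative."""
    raise NotImplementedError

def evaluate_topics(topics, reference):
    """Placeholder: score a topic list against some reference annotation."""
    raise NotImplementedError

def run_ablation(corpus, reference):
    local_topics = extract_topics_local(corpus)

    # Condition A (the ablation I am proposing): take the local model's
    # limited set of topics directly as the final result.
    score_local_only = evaluate_topics(local_topics, reference)

    # Condition B (the current pipeline as I understand it): let GPT-4
    # prune low-value topics before evaluation.
    score_with_gpt4 = evaluate_topics(gpt4_filter_topics(local_topics), reference)

    return {"local_only": score_local_only, "with_gpt4_filter": score_with_gpt4}
```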

In any case, thank you for sharing your knowledge and code; I have benefited a great deal from it.
