
Server architecture #146

Closed
stefsmeets opened this issue Dec 19, 2022 · 5 comments · Fixed by #152


@stefsmeets
Contributor

stefsmeets commented Dec 19, 2022

Hi all,

With #150 we want to add conversational entity linking to the server, and work is also being done by @eriktks on a BERT version for linking and disambiguation (in addition to Flair). Supporting this with the current setup of the server is tricky.

One of the issues is that the server and the entity-linking logic are interwoven, so adding new features is difficult. To avoid hacking around too much, I would like to update the server to something that is easier to update and maintain.

I'm working on a branch that re-implements the server using Pydantic/FastAPI. This is more performant, and has built-in data validation and API documentation. I believe this will make it much easier to extend the server later on.

To avoid breaking existing functionality, I tried to keep the first iteration the same as it is currently. Let me know if there are any use cases that I should be aware of.

I would be curious to hear your thoughts, and also your input in what you think the server should look like and what other functionality it should support.

@stefsmeets
Contributor Author

stefsmeets commented Dec 19, 2022

This made implementing the conversational entity linking into the server much easier. See this branch: https://github.com/informagi/REL/blob/fastapi-conv-el/src/REL/server.py#L95-L109

@KDercksen
Contributor

Great idea, I also used FastAPI with Pydantic for another project and very much prefer it over hand-rolling a server!

One nice piece of functionality is the automatic documentation page (root_url/docs), where e.g. request and response schemas are documented. Maybe we could add specifications for the various response schemas as well, in addition to the request schema there is now.

See here: https://fastapi.tiangolo.com/tutorial/response-model/

@stefsmeets
Contributor Author

stefsmeets commented Dec 20, 2022

Thanks for the pointer about the response schemas! I have also used Pydantic extensively for another project, but this is my first time using FastAPI, so I'm still exploring what it can do :-)

Given that #147 pretty much works as is, what would it take to get that merged?

I know that there are some docs that should be updated:

@stefsmeets
Contributor Author

Suggestion from @hasibi is to mimic the WAT API: https://sobigdata.d4science.org/web/tagme/wat-api

@stefsmeets
Contributor Author

stefsmeets commented Jan 30, 2023

I'm not sure if implementing response models is possible with a flat API. It would be easier if the mode is the endpoint, i.e. the request points to https://localhost:5555/ne or https://localhost:5555/conv. With a flat endpoint you sort of have to pile all response models together and hope that it works for all cases. I think this will be a nightmare to debug in the future.

Possible solutions:

  1. Forget about response models.
  2. Forego the flat API, and have an endpoint for each mode.

Originally posted by @stefsmeets in #152 (comment)

Any thoughts?
