Transformer from scratch - application in product recommendations

This repo contains a from-scratch implementation of a Transformer model in TensorFlow/Keras, following the original paper "Attention Is All You Need", in model.py. The model is applied in the recommendation domain to predict products from a sequence of previously interacted products.
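
To illustrate the general idea, below is a minimal sketch of a Keras Transformer encoder block applied to a product-ID sequence. The hyperparameters (VOCAB_SIZE, MAX_LEN, D_MODEL, etc.), the learned positional embedding, and the average-pooling head are illustrative assumptions, not the exact architecture in model.py (the original paper, for instance, uses sinusoidal positional encodings).

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical hyperparameters for illustration; the real values live in model.py.
VOCAB_SIZE = 10_000   # number of distinct product IDs
MAX_LEN = 50          # maximum length of a user's product history
D_MODEL = 128
NUM_HEADS = 4
FF_DIM = 512

def transformer_block(x):
    # Multi-head self-attention followed by a position-wise feed-forward network,
    # each wrapped in a residual connection + layer normalization (post-norm,
    # as in the original paper).
    attn = layers.MultiHeadAttention(num_heads=NUM_HEADS,
                                     key_dim=D_MODEL // NUM_HEADS)(x, x)
    x = layers.LayerNormalization(epsilon=1e-6)(x + attn)
    ff = layers.Dense(FF_DIM, activation="relu")(x)
    ff = layers.Dense(D_MODEL)(ff)
    return layers.LayerNormalization(epsilon=1e-6)(x + ff)

def build_model():
    # Input: a sequence of product IDs from a user's interaction history.
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    # Learned token and position embeddings (an assumption; the paper uses
    # fixed sinusoidal encodings).
    tok = layers.Embedding(VOCAB_SIZE, D_MODEL)(inputs)
    pos = layers.Embedding(MAX_LEN, D_MODEL)(tf.range(MAX_LEN))
    x = tok + pos
    x = transformer_block(x)
    # Pool over the sequence and predict a distribution over products.
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(VOCAB_SIZE, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Trained on sequences of product IDs with the next interacted product as the label, such a model produces a probability over the catalog from which top-k recommendations can be drawn.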