How to use the trained model? #2

Open
nissansz opened this issue May 4, 2023 · 5 comments
@nissansz commented May 4, 2023

No description provided.

@twelfth-star (Owner)

If the trained CNN model is saved as ../data/models/cnn_model.torch, then you can use the following code:

import torch
from PIL import Image

# Load the saved model (the file stores the full nn.Module).
cnn_net = torch.load('../data/models/cnn_model.torch')

# Open the input image and convert it to grayscale.
img = Image.open(img_path)
img = img.copy().convert("L")

label = cnn_net(img)

The label will be the ID of the font.
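
Note that PyTorch layers operate on tensors rather than PIL images, so in practice the grayscale image will likely need to be converted to a tensor (with batch and channel dimensions) before it is fed to the network. A minimal sketch, assuming the model expects a single-channel float tensor; the 105x105 input size is a placeholder and must match whatever size the model was trained on:

import torch
from PIL import Image
from torchvision import transforms

cnn_net = torch.load('../data/models/cnn_model.torch')
cnn_net.eval()

# Preprocessing sketch: the resize target (105x105) is an assumption and must
# match the size the model was trained on.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((105, 105)),
    transforms.ToTensor(),            # PIL image -> 1xHxW float tensor in [0, 1]
])

img = Image.open(img_path)
x = preprocess(img).unsqueeze(0)      # add a batch dimension -> 1x1xHxW

with torch.no_grad():
    out = cnn_net(x)
    # If the network returns per-class scores, the font ID is the argmax.
    label = out.argmax(dim=1).item()

print(label)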

@nissansz (Author) commented May 4, 2023

Do you have a pre-trained model?
How can I use an ONNX version of such a model?

I tried https://github.com/HemingwayLee/deepfont-implement, and the results do not seem correct for Japanese fonts.

@twelfth-star (Owner)

I trained a toy model for Japanese fonts a few months ago. You can download the model here. It can only recognize the following five fonts. You can download the fonts here.

  • GenEiAntiqueNv5-M.ttf
  • GenEiKoburiMin6-R.ttf
  • GenEiLateMinN_v2.ttf
  • GenEiMGothic2-Bold.ttf
  • GenEiMGothic2-Regular.ttf

I have refactored my code and I am currently trying to train a larger model which can recognize 169 Japanese fonts. I may release it soon.
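
Since the label returned by the model is the font ID, mapping it back to one of the five fonts above could look like the sketch below. The ordering is an assumption (it simply follows the list above); the real mapping depends on how the training data was generated.

# Hypothetical ID -> font-name table; the order follows the list above and is
# an assumption, not something confirmed by the repository.
FONT_NAMES = [
    'GenEiAntiqueNv5-M.ttf',
    'GenEiKoburiMin6-R.ttf',
    'GenEiLateMinN_v2.ttf',
    'GenEiMGothic2-Bold.ttf',
    'GenEiMGothic2-Regular.ttf',
]

def font_name_from_label(label: int) -> str:
    # Translate a predicted font ID into a font file name.
    return FONT_NAMES[label]

print(font_name_from_label(3))  # -> 'GenEiMGothic2-Bold.ttf' under this assumed ordering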

@nissansz (Author) commented May 9, 2023

Good. I will wait for your larger model to test it with MS Gothic, etc.

@nissansz (Author)

import torch
from PIL import Image

cnn_net = torch.load(r'L:/font/cnn_model.torch')

img = Image.open(r'F:/24.png')
img = img.copy().convert("L")

label = cnn_net(img)
print(label)

Error ===========================================

D:\Python3.8.8\python.exe F:/pycharm2020.2/universal-font-recognition-main/eval.py
Traceback (most recent call last):
File "F:/pycharm2020.2/universal-font-recognition-main/eval.py", line 12, in
label = cnn_net(img)
File "D:\Python3.8.8\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "F:\pycharm2020.2\universal-font-recognition-main\utils\CNN.py", line 48, in forward
X = self.Cu(X)
File "D:\Python3.8.8\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Python3.8.8\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
input = module(input)
File "D:\Python3.8.8\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Python3.8.8\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\Python3.8.8\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
TypeError: conv2d() received an invalid combination of arguments - got (Image, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of:

  • (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
    didn't match because some of the arguments have invalid types: (!Image!, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int)
  • (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
    didn't match because some of the arguments have invalid types: (!Image!, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int)

Process finished with exit code 1
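
The TypeError above comes from passing the PIL Image object straight into the network: conv2d accepts only tensors. Converting the image to a tensor first, as sketched in the earlier comment, should get past this error; a minimal version (the resize target is again an assumption and must match the training size):

from torchvision import transforms

to_tensor = transforms.Compose([transforms.Resize((105, 105)), transforms.ToTensor()])
x = to_tensor(img).unsqueeze(0)   # 1x1xHxW float tensor instead of a PIL Image
label = cnn_net(x)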
