
Model input does not match with RGB image #4

Open
yeohkc1995 opened this issue Sep 15, 2020 · 1 comment

Comments

@yeohkc1995

yeohkc1995 commented Sep 15, 2020

Hi. I am looking to deploy FastDepth on a camera stream, but I am having trouble feeding an image into the model. I loaded the model (from the original FastDepth GitHub repo) and tried to input an RGB image, but the dimensions do not match. Do you know what the issue could be?

import cv2
import torch
import torch.backends.cudnn as cudnn
import torch.nn as nn

cudnn.benchmark = True

class MobileNet(nn.Module):
    def __init__(self, relu6=True):
        super(MobileNet, self).__init__()

        def relu(relu6):
            if relu6:
                return nn.ReLU6(inplace=True)
            else:
                return nn.ReLU(inplace=True)

        def conv_bn(inp, oup, stride, relu6):
            return nn.Sequential(
                nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
                nn.BatchNorm2d(oup),
                relu(relu6),
            )

        def conv_dw(inp, oup, stride, relu6):
            return nn.Sequential(
                nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
                nn.BatchNorm2d(inp),
                relu(relu6),
    
                nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
                relu(relu6),
            )

        self.model = nn.Sequential(
            conv_bn(  3,  32, 2, relu6), 
            conv_dw( 32,  64, 1, relu6),
            conv_dw( 64, 128, 2, relu6),
            conv_dw(128, 128, 1, relu6),
            conv_dw(128, 256, 2, relu6),
            conv_dw(256, 256, 1, relu6),
            conv_dw(256, 512, 2, relu6),
            conv_dw(512, 512, 1, relu6),
            conv_dw(512, 512, 1, relu6),
            conv_dw(512, 512, 1, relu6),
            conv_dw(512, 512, 1, relu6),
            conv_dw(512, 512, 1, relu6),
            conv_dw(512, 1024, 2, relu6),
            conv_dw(1024, 1024, 1, relu6),
            nn.AvgPool2d(7),
        )
        self.fc = nn.Linear(1024, 1000)

    def forward(self, x):
        x = self.model(x)
        x = x.view(-1, 1024)
        x = self.fc(x)
        return x

model = MobileNet(relu6=True)

# Load the pruned FastDepth weights; strict=False skips checkpoint keys
# that do not match this MobileNet definition.
checkpoint = torch.load("mobilnet/mobilenet-nnconv5dw-skipadd-pruned.pth.tar", map_location='cpu')
model.load_state_dict(checkpoint, strict=False)
print(model)

img = cv2.imread('rgb.png')

pred = model(torch.from_numpy(img))

The error message is: RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [224, 224, 3] instead

@vaibhawkhemka

@yeohkc1995 Reshape the input to [1, 3, 224, 224] (add a batch dimension and move the channels first) and then feed it into the network for evaluation.
Hope it helps.
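The reshape described above can be sketched as a small preprocessing step. This is a minimal sketch, not code from the FastDepth repo: a random array stands in for the `cv2.imread('rgb.png')` frame so the snippet is self-contained, and the division by 255 is an assumption (FastDepth's actual normalization may differ).

```python
import numpy as np
import torch

# Dummy 480x640 BGR frame standing in for cv2.imread('rgb.png').
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# With a real frame you would use cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
# and cv2.resize(frame, (224, 224)); plain NumPy keeps this self-contained.
rgb = frame[:, :, ::-1]   # BGR -> RGB (reverse the channel axis)
rgb = rgb[:224, :224]     # stand-in for a resize to 224x224

# HWC uint8 -> NCHW float32 in [0, 1]: the [1, 3, 224, 224] layout
# that the first layer, conv_bn(3, 32, 2, relu6), expects.
x = (torch.from_numpy(np.ascontiguousarray(rgb))
         .permute(2, 0, 1)   # HWC -> CHW
         .float()
         .div(255.0)         # assumed scaling to [0, 1]
         .unsqueeze(0))      # add batch dimension

print(x.shape)  # torch.Size([1, 3, 224, 224])
```

`pred = model(x)` should then run without the dimension error; PyTorch's `Conv2d` always expects a 4-D `[batch, channels, height, width]` input, which is why the 3-D `[224, 224, 3]` tensor was rejected.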
