RuntimeError: Sizes of tensors must match except in dimension 1. #1

@cta5425

Description

Hi Ulrich,

Many thanks for your contribution! I have a general question about the methodology and a more specific one. Why did you choose to combine the CINN with a variational autoencoder rather than using a conditional variational autoencoder?

Also, I tried to run the provided Jupyter notebooks. They all worked fine except for the pose estimation example in pose_estimation.ipynb. The error occurred on the line `zz = torch.randn(batch_size, 12).to(device)`. I saw in some other commits that you set `batch_size = 1`; when I do the same I avoid the error, but the predictions are far from the ground truth. With `batch_size = 32`, as it was originally, I get the following error:

```
Ground truth pose:
tensor([[-0.9466, 0.0161, -0.3221, -1.1683],
[-0.3225, -0.0472, 0.9454, 3.4293],
[ 0.0000, 0.9988, 0.0498, 0.1808],
[ 0.0000, 0.0000, 0.0000, 1.0000]], device='cuda:0')

RuntimeError Traceback (most recent call last)
Cell In [13], line 99
97 # ------ Finally, we compare the results ------
98 print("Ground truth pose:\n",new_pose)
---> 99 pred_pose = tau.reverse(zz, z).squeeze(-1).squeeze(-1)
100 pred_pose = to_pose(pred_pose)
101 print("---------------\n Predicted pose: \n", pred_pose)

File ~/local-repo/AutoNeRF/models/cinn.py:80, in ConditionalTransformer.reverse(self, out, conditioning)
78 def reverse(self, out, conditioning):
79 embedding = self.embed(conditioning)
---> 80 return self.flow(out, embedding, reverse=True)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
1530 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1531 else:
-> 1532 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
1536 # If we don't have any hooks, we want to skip the rest of the logic in
1537 # this function, and just call forward.
1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1539 or _global_backward_pre_hooks or _global_backward_hooks
1540 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541 return forward_call(*args, **kwargs)
1543 try:
1544 result = None

File ~/local-repo/AutoNeRF/models/blocks.py:326, in ConditionalFlow.forward(self, x, embedding, reverse)
324 else:
325 for i in reversed(range(self.n_flows)):
--> 326 x = self.sub_layers[i](x, hconds[i], reverse=True)
327 return x

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
1530 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1531 else:
-> 1532 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
1536 # If we don't have any hooks, we want to skip the rest of the logic in
1537 # this function, and just call forward.
1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1539 or _global_backward_pre_hooks or _global_backward_hooks
1540 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541 return forward_call(*args, **kwargs)
1543 try:
1544 result = None

File ~/local-repo/AutoNeRF/models/blocks.py:235, in ConditionalFlatDoubleCouplingFlowBlock.forward(self, x, xcond, reverse)
233 h = x
234 h = self.shuffle(h, reverse=True)
--> 235 h = self.coupling(h, xcond, reverse=True)
236 h = self.activation(h, reverse=True)
237 h = self.norm_layer(h, reverse=True)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
1530 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1531 else:
-> 1532 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
1536 # If we don't have any hooks, we want to skip the rest of the logic in
1537 # this function, and just call forward.
1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1539 or _global_backward_pre_hooks or _global_backward_hooks
1540 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541 return forward_call(*args, **kwargs)
1543 try:
1544 result = None

File ~/local-repo/AutoNeRF/models/blocks.py:110, in ConditionalDoubleVectorCouplingBlock.forward(self, x, xc, reverse)
108 x = torch.cat(torch.chunk(x, 2, dim=1)[::-1], dim=1)
109 x = torch.chunk(x, 2, dim=1)
--> 110 conditioner_input = torch.cat((x[idx_apply], xc), dim=1)
111 x_ = (x[idx_keep] - self.ti) * self.si.neg().exp()
112 x = torch.cat((x[idx_apply], x_), dim=1)

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 32 but got size 1 for tensor number 1 in the list.
```
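For what it's worth, the error itself can be reproduced in isolation. It looks like the conditioning embedding passed into `torch.cat` at blocks.py:110 has batch size 1 while the sampled `zz` has batch size 32, so concatenation along dim=1 fails because dim 0 disagrees. A minimal sketch (the tensor shapes here are illustrative assumptions, not the model's actual dimensions), including a hypothetical workaround that broadcasts the conditioning to the batch size:

```python
import torch

# torch.cat along dim=1 requires every other dimension to match.
# Here the flow input has batch size 32 but the conditioning has batch size 1,
# mirroring the shapes implied by the traceback.
batch_size = 32
x_apply = torch.randn(batch_size, 6)   # half of the 12-dim latent (assumed split)
xc = torch.randn(1, 256)               # conditioning embedding with batch size 1 (assumed width)

try:
    torch.cat((x_apply, xc), dim=1)
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 1. Expected size 32 but got size 1 ...

# Possible workaround: broadcast the conditioning along the batch dimension
# before concatenating. expand() creates a view, so no data is copied.
xc_batched = xc.expand(batch_size, -1)
out = torch.cat((x_apply, xc_batched), dim=1)
print(out.shape)  # torch.Size([32, 262])
```

If that diagnosis is right, expanding (or repeating) the conditioning tensor to match `batch_size` before `tau.reverse(zz, z)` might let the notebook run with `batch_size = 32`, though I'm not sure whether that is the intended usage.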

Many thanks in advance.
