Importing a COLMAP sparse reconstruction (SfM poses) into Blender
The main issue is the conversion between pose conventions.
Experiments confirmed the following:
Extrinsics
The rotation R computed from COLMAP's extrinsic quaternion (QW, QX, QY, QZ) is Rw2c, i.e. the world-to-camera rotation.
Tx, Ty, Tz are the coordinates of the world origin in the camera frame, i.e. the t in P = [R|t].
The camera center in world coordinates is C = -Rc2w·t,
where Rc2w is the transpose (equivalently, the inverse) of Rw2c. See the sketch below.
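A minimal sketch of this relation using plain numpy on one images.txt pose line (the trailing camera id and file name are made up for illustration; COLMAP stores the quaternion as QW QX QY QZ):
import numpy as np

def quat_to_rot(qw, qx, qy, qz):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

# "IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME" (id/name here are hypothetical)
line = "1 0.973498 0.0553783 -0.21697 0.0464699 4.51543 0.363207 1.23276 2 img.jpg"
vals = line.split()
qw, qx, qy, qz = map(float, vals[1:5])
t = np.array(list(map(float, vals[5:8])))

R_w2c = quat_to_rot(qw, qx, qy, qz)   # world -> camera rotation
R_c2w = R_w2c.T                       # camera -> world rotation
C = -R_c2w @ t                        # camera center in world coordinates
print(C)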
Intrinsics
Blender does not support a skew parameter.
Focal length in pixels: f = lens / sensor_size × max(width, height), where sensor_size is the sensor dimension (in mm) along the larger image dimension.
The principal point is expressed via shift_x and shift_y. See the sketch below.
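A minimal sketch of this intrinsics mapping, assuming the PINHOLE parameters used in the experiment below and a horizontal sensor fit (the 2 mm sensor width is an arbitrary choice; only its ratio to the image width matters):
w, h = 3072.0, 2048.0
fx, fy = 2777.04, 2756.82
cx, cy = 1536.0, 1024.0

sensor_width_mm = 2.0
lens_mm = fx / w * sensor_width_mm   # focal length in pixels -> millimetres
shift_x = -(cx / w - 0.5)            # principal point offset, normalized by width
shift_y = (cy - 0.5 * h) / w         # Blender normalizes both shifts by the fit dimension
pixel_aspect_y = fy / fx             # fy != fx is modelled via the pixel aspect ratio
print(lens_mm, shift_x, shift_y, pixel_aspect_y)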
Note
COLMAP uses the CV (computer vision) convention: image x points right, y points down.
Blender uses the CG convention: image x points right, y points up.
Therefore the rotation matrix must be multiplied by a change-of-basis matrix:
R_bcam2cv = Matrix(
((1, 0, 0),
(0, -1, 0),
(0, 0, -1)))
Experiment
import bpy
import mathutils
import numpy as np
from mathutils import Matrix, Vector

# Change of basis between the Blender camera frame and the CV camera frame
# (x right, y down, z forward); the matrix is its own inverse.
R_bcam2cv = Matrix(
    ((1, 0, 0),
     (0, -1, 0),
     (0, 0, -1)))
print(R_bcam2cv)

# COLMAP extrinsics: the quaternion (qw, qx, qy, qz) encodes Rw2c, and the
# translation is the world origin expressed in camera coordinates.
quat_a = mathutils.Quaternion((0.979036, 0.0658575, -0.189701, 0.0341345))
location = Vector((3.62986, 0.228074, -0.737925))
rotation = quat_a.to_matrix()
# camera center in world coordinates: C = -Rw2c^T @ t
location = -1 * rotation.transposed() @ location
# rotation of the Blender camera in world space: Rc2w @ R_bcam2cv
rotation = rotation.transposed() @ R_bcam2cv
print(location)

# COLMAP camera line: CAMERA_ID MODEL WIDTH HEIGHT fx fy cx cy
instricstr = "2 2 3072 2048 2777.04 2756.82 1536 1024 "
# COLMAP image line (pose part): IMAGE_ID QW QX QY QZ TX TY TZ
exstric = "1 0.973498 0.0553783 -0.21697 0.0464699 4.51543 0.363207 1.23276 "
instric = np.array(instricstr.split()).astype(np.float64)  # np.float is removed in recent numpy
print(instric)

#cam.dof_object = empty
#scene.render.resolution_percentage = scale * 100
# create a new camera
bpy.ops.object.add(
    type='CAMERA',
    location=location)
ob = bpy.context.object
ob.name = 'CamFrom3x4PObj'
cam = ob.data
cam.name = 'CamFrom3x4P'
cam.type = 'PERSP'
#cam.lens = instric[4]/instric[2]*2.
#cam.lens_unit = 'MILLIMETERS'
cam.sensor_width = 2
sensor_width_in_mm = cam.sensor_width
#sensor_fit = get_sensor_fit('AUTO', instric[2], instric[3])  # helper from the referenced blender_utils.py
cam.sensor_fit = 'HORIZONTAL'
print(cam.sensor_fit)
#cam.shift_x = 0
#cam.shift_y = 0
cam.clip_start = 1.0
cam.clip_end = 250000000.0

w = instric[2]
h = instric[3]
f_x = instric[4]
f_y = instric[5]
c_x = instric[6]
c_y = instric[7]
# principal point offset as Blender shifts, both normalized by the fit (horizontal) dimension
cam.shift_x = -(c_x / w - 0.5)
cam.shift_y = (c_y - 0.5 * h) / w
# focal length in pixels -> focal length in millimetres
cam.lens = f_x / w * sensor_width_in_mm
# fy != fx is modelled through the pixel aspect ratio
pixel_aspect = f_y / f_x

scene = bpy.context.scene
scale = 1
scene.render.resolution_x = int(instric[2] / scale)  # render resolution must be an integer
scene.render.resolution_y = int(instric[3] / scale)
scene.render.pixel_aspect_x = 1.0
scene.render.pixel_aspect_y = pixel_aspect

# place the camera: matrix_world = T(C) @ R (camera-to-world, Blender convention)
ob.matrix_world = Matrix.Translation(location) @ rotation.to_4x4()
scene.camera = ob
location, rotation = scene.camera.matrix_world.decompose()[0:2]
print(location, rotation)
Original image vs. rendered image:
References:
https://docs.blender.org/api/current/bpy.types.Camera.html
https://github.com/zju3dv/pvnet-rendering/blob/master/blender/blender_utils.py
https://www.rojtberg.net/1601/from-blender-to-opencv-camera-and-back/