Postshot uses Structure from Motion, Neural Radiance Fields (NeRF), and Gaussian splats to create 3D models through which you can fly virtual cameras. It is being developed by Jascha Wetzel in Munich, Germany, and is being shared (for now) as beta code. Its Discord is a melting pot of photogrammetry folks, game developers, professionals, and tinkerers. The program itself is compatible with other, similar software, and folks are sharing multiprogram workflows that make my head spin. LOL.
I was led to this program by the developer of Lifecast Volurama ( https://lifecastvr.com/ ), but that will have to be a separate post. In a nutshell, you take a series of videos or overlapping photos while circling your subject, and the program fairly quickly converts them into a model. The first image below, for example, is the GUI from a self-portrait model showing me surrounded by a series of pyramid graphics indicating the locations of the images used to create the model (of me). The source video for this model was taken on my back deck, but I deleted all of the surrounding "splats" so I am isolated in black.
Below my image is the timeline for a virtual camera. In this example I am creating an "orbit" video where the camera moves in a circle around me. I actually create two videos, left and right, which I combine in the Vegas video editor to create a 3D video. Postshot cannot create 3D videos directly, so I have an Excel spreadsheet where I calculate the positions and motion of the left and right virtual cameras. I set up a 360-frame clip and create keyframes at frames 0, 1, 5, 45, 90, 135, 180, 225, 270, 315, 355, 359, and 360: every 45 degrees around the circle, plus extra keyframes at 1, 5, 355, and 359 to help Postshot's smoothing algorithm fill in the rest of the arc. The right side of the image shows where the x, y, z location and rotation of the virtual camera are specified for each keyframe.
If the left camera follows a circular path around the subject, then the location of the right camera, 0.1 units to the right of the left camera, is

XR = XL + 0.1*sin(angle)
ZR = ZL - 0.1*cos(angle)

where XL and XR are the left and right x locations, ZL and ZR are the left and right z locations, and angle is the left camera's rotation about the subject.
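If you'd rather script the keyframe math than keep it in a spreadsheet, here is a rough Python sketch of the same formulas. The orbit radius, camera height, and the assumption that the left camera sits at (R*cos(angle), R*sin(angle)) are mine, not values from my actual project; with that placement the 0.1-unit offset comes out exactly perpendicular to the line to the subject, which is what you want for a stereo pair.

```python
import math

BASELINE = 0.1   # stereo separation in scene units (from the formulas above)
RADIUS = 3.0     # assumed orbit radius; substitute your own
HEIGHT = 1.0     # assumed camera height (y); substitute your own

# Keyframe angles: every 45 degrees plus extras near the ends of the arc
KEYFRAME_DEGREES = [0, 1, 5, 45, 90, 135, 180, 225, 270, 315, 355, 359, 360]

def camera_pair(angle_deg):
    """Return (left, right) camera positions for one keyframe.

    Left camera assumed at (R*cos(a), R*sin(a)); right camera offset by
        XR = XL + 0.1*sin(angle),  ZR = ZL - 0.1*cos(angle)
    which places it BASELINE units to the left camera's right, perpendicular
    to the direction toward the subject at the origin.
    """
    a = math.radians(angle_deg)
    xl, zl = RADIUS * math.cos(a), RADIUS * math.sin(a)
    xr = xl + BASELINE * math.sin(a)
    zr = zl - BASELINE * math.cos(a)
    return (xl, HEIGHT, zl), (xr, HEIGHT, zr)

if __name__ == "__main__":
    for deg in KEYFRAME_DEGREES:
        left, right = camera_pair(deg)
        print(f"{deg:3d} deg  L=({left[0]:+.3f}, {left[2]:+.3f})"
              f"  R=({right[0]:+.3f}, {right[2]:+.3f})")
```

You can paste the printed x/z values straight into the keyframes on Postshot's timeline (y and the rotation are still set by hand).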
The resulting self-portrait is linked below; it shows a slight wobble that I'm told is a known issue in the developer's queue to correct. I'm also sharing another orbit video, where I sent the virtual camera in a downward spiral around a bit of yard art.
One of the things I've been tinkering with is 360 photography and video. Postshot doesn't support this directly, but I've managed to do it as follows:
After creating the model, I created six virtual cameras pointing up, down, left, right, front, and back, each rendering a square image with a 90-degree field of view. Together these constitute a cubemap. Ideally, I would then use the FFmpeg v360 filter to convert the cubemap to the equirectangular projection normally used for 360 panoramas and videos. Unfortunately, the exposure settings are inconsistent across the videos created by Postshot, so the "seams" between the cube faces show up in the resulting 360 video. To work around this I used the stitching software PTGui Pro in batch mode to convert each frame:

1. Use Vegas to render individual frames from each cube-face video.
2. Use the Bulk Rename Utility program to append the orientation to each frame's name; PTGui can combine cube faces when each cube-face image's name has the orientation appended to it.
3. Let PTGui blend the cube-face images, even though they don't overlap, which solves the visible seam problem.

Finally, I combine the individual equirectangular images into a video using Vegas, add metadata using the Spatial Media Metadata Injector, and here is the result:
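If you don't have Bulk Rename Utility handy, the renaming step can be scripted in a few lines of Python. The folder layout below (one folder per cube face, frames named frame_0001.png and so on) is an assumption for illustration, not my actual Vegas export; adjust the names to match yours.

```python
from pathlib import Path

# Assumed cube-face folder names; one folder of rendered frames per face.
FACES = ["front", "back", "left", "right", "up", "down"]

def append_orientation(root):
    """Rename frame_0001.png -> frame_0001_front.png (etc.) inside each
    face folder, so PTGui's batch mode can tell which cube face each
    image belongs to from its file name."""
    root = Path(root)
    for face in FACES:
        for frame in sorted((root / face).glob("*.png")):
            frame.rename(frame.with_name(f"{frame.stem}_{face}{frame.suffix}"))

if __name__ == "__main__":
    append_orientation("cubemap_frames")  # hypothetical export folder
```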
Anyway...I've covered a lot of ground here. Please let me know if this is interesting and feel free to ask questions.
Wayne