
integrate lightning pose with anipose #223

Open
siddypot opened this issue Nov 21, 2024 · 18 comments


@siddypot

I'm looking to convert the checkpoint file and .pkl to the .h5 file format. Has anyone come up with a solution for this?

@ksikka
Collaborator

ksikka commented Nov 21, 2024

Hi @siddypot, why do you need to do this?

@siddypot
Author

I would like to use Anipose with Lightning Pose for 3D tracking. Anipose only accepts the .h5 file format.

@ksikka
Collaborator

ksikka commented Nov 21, 2024

Ok, I'll look into this.

@siddypot
Author

Thank you!

@themattinthehatt
Collaborator

themattinthehatt commented Nov 21, 2024

@ksikka it might be worth reaching out to the anipose people instead and seeing if we can make a PR to allow anipose to accept the csv format (it really should anyways) - let's discuss tomorrow.

@siddypot you're referring to the format of the pose predictions right? Or are you also referring to the model weights themselves?

@siddypot
Author

I am referring to the model weights. Lightning Pose generates a checkpoint file (.ckpt), and within the .ckpt there is a .pkl file.

@themattinthehatt
Collaborator

themattinthehatt commented Nov 22, 2024

@siddypot Maybe I am misunderstanding - anipose shouldn't need the LP checkpoint file. There is a larger anipose pipeline that runs inference with the pose estimation network and then runs triangulation on the pose estimation outputs. The first part will require LP integration with anipose, which we are currently thinking about, but it will be a bit more complex than just providing an LP checkpoint. On the other hand, you can run inference yourself with LP and then use the later part of the anipose pipeline to run just the triangulation. That part will be easier to integrate.

Can you describe your current workflow a bit more? Are you running inference on new videos with Lightning Pose yourself, and then hoping to use those outputs in Anipose? Or do you want the anipose pipeline to take care of the inference as well?

@siddypot
Author

After training LP I am left with 2D pose estimates in csv format. Anipose expects a DLC model. Based on the DLC model, Anipose generates 2D pose estimation data in .csv, .pickle, and .h5 for all views, and then triangulates based on that data. Using LP I managed to get the .csv and the .pickle file, but Anipose will not triangulate without all three files. I was hoping it would be a simple translation to turn the LP data into a DLC model, or, as you said, to take the 2D pose estimation data from LP and just use Anipose for triangulation. Either way, I cannot get the LP data into Anipose, which is my biggest issue right now.

Sorry if I am not making too much sense, I am inexperienced with this technology.

@themattinthehatt
Collaborator

No problem at all! We're very happy to make the integration between LP and Anipose much smoother. Can you point us to the place in the anipose code where you are running into issues?

@siddypot
Author

siddypot commented Nov 22, 2024

model_folder in the anipose config takes in the DLC project path. It would be great if we could get anipose to directly recognize the model folder of LP, but that may be a more difficult, longer-term project.

anipose analyze (line 141) invokes pose_videos. In pose_videos DLC is used, and I can't seem to change that without everything breaking.

@themattinthehatt
Collaborator

Thanks for the pointers, we'll look into it and get back to you early next week.

@themattinthehatt
Collaborator

@siddypot I took a look at anipose, and it will take a bit of work to integrate LP. This is on our roadmap, but we won't be able to get to this until after the holidays. In the meantime I would suggest looking at the docs for aniposelib, which is the backend for anipose. This exposes the actual tools much more clearly.

To go this route you'll need to run inference on videos yourself using LP (see for example here: https://lightning-pose.readthedocs.io/en/latest/source/user_guide/inference.html) and then you can follow the example in the aniposelib docs (https://anipose.readthedocs.io/en/latest/aniposelib-tutorial.html). You'll have to modify this line in the tutorial:

d = load_pose2d_fnames(fname_dict, cam_names=cgroup.get_names())

to load csv files from LP in the proper format, but after that the rest of the tutorial should look the same.
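Something along these lines might work as a starting point (an untested sketch; it assumes one LP prediction csv per camera view with the standard three-row scorer/bodyparts/coords header, and the file names below are just placeholders):

import numpy as np
import pandas as pd

def load_lp_csvs(fname_dict, cam_names):
    # Build the points/scores/bodyparts dict that the rest of the tutorial expects,
    # directly from LP prediction csvs (one csv per camera view).
    all_points, all_scores, bodyparts = [], [], None
    for cam in cam_names:
        # LP csvs use a three-row header (scorer / bodyparts / coords), like DLC
        df = pd.read_csv(fname_dict[cam], header=[0, 1, 2], index_col=0)
        if bodyparts is None:
            bodyparts = list(dict.fromkeys(df.columns.get_level_values(1)))
        xs = df.xs('x', level=2, axis=1).to_numpy()             # (n_frames, n_joints)
        ys = df.xs('y', level=2, axis=1).to_numpy()
        likelihoods = df.xs('likelihood', level=2, axis=1).to_numpy()
        all_points.append(np.stack([xs, ys], axis=-1))          # (n_frames, n_joints, 2)
        all_scores.append(likelihoods)
    return {
        'points': np.stack(all_points),    # (n_cams, n_frames, n_joints, 2)
        'scores': np.stack(all_scores),    # (n_cams, n_frames, n_joints)
        'bodyparts': bodyparts,
    }

d = load_lp_csvs({'A': 'viewA.csv', 'B': 'viewB.csv', 'C': 'viewC.csv'},
                 cam_names=cgroup.get_names())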

I'll make sure to keep you up-to-date on the anipose integration from our side.

@themattinthehatt changed the title from "lightning pose export to .h5 file format" to "integrate lightning pose with anipose" on Dec 4, 2024
@YitingChang

YitingChang commented Dec 4, 2024

I'm using anipose for triangulation and would like to follow this.
I wrote a simple function to convert an LP csv file to an hdf file.

import pandas as pd

def lp2anipose(lp_path, anipose_path):
    # Read the LP csv with no header so the three header rows stay in the frame
    df = pd.read_csv(lp_path, header=None, index_col=0)
    # Convert object data to float data
    arr = df.iloc[3:].to_numpy()
    new_arr = arr.astype('f')
    new_df = pd.DataFrame(data=new_arr)
    # Create multi-level index for columns from the scorer/bodyparts/coords rows
    column_arr = df.iloc[0:3].to_numpy()
    tuples = list(zip(*column_arr))
    new_df.columns = pd.MultiIndex.from_tuples(tuples, names=df.index[0:3])
    # Save in hdf format (requires the 'tables' package)
    new_df.to_hdf(anipose_path, key='new_df', mode='w')
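For example (placeholder file names), each view's LP predictions get converted before loading into anipose:

lp2anipose('viewA_predictions.csv', 'viewA.h5')
lp2anipose('viewB_predictions.csv', 'viewB.h5')
lp2anipose('viewC_predictions.csv', 'viewC.h5')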

@themattinthehatt
Collaborator

Thanks @YitingChang! I think you might be able to simplify this by doing

df = pd.read_csv(lp_path, header=[0, 1, 2], index_col=0)
df.to_hdf(anipose_path, key='new_df', mode='w')
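
read_csv with header=[0, 1, 2] parses the scorer/bodyparts/coords rows straight into a column MultiIndex, so the manual reconstruction shouldn't be needed.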

@YitingChang

Great! I will do that.

@siddypot
Author

siddypot commented Dec 5, 2024

I'm using anipose for triangulation and would like to follow this. I wrote a simple function to convert LP csv file to hdf file.

def lp2anipose(lp_path, anipose_path):
    df = pd.read_csv(lp_path, header = None, index_col = 0)
    # Convert object data to float data
    arr = df.iloc[3:].to_numpy()
    new_arr = arr.astype('f')
    new_df = pd.DataFrame(data=new_arr)
    # Create multi-level index for columns
    column_arr = df.iloc[0:3].to_numpy() 
    tuples = list(zip(*column_arr))
    new_df.columns = pd.MultiIndex.from_tuples(tuples, names=df.index[0:3])
    # Save in hdf format
    new_df.to_hdf(anipose_path, key = 'new_df', mode='w') 

@YitingChang Have you successfully triangulated using this h5 converter? If so, could you provide documentation on how you did it?

After getting the h5 files for my videos, running

import numpy as np
from aniposelib.utils import load_pose2d_fnames

fname_dict = {
    'A': 'viewA.h5',
    'B': 'viewB.h5',
    'C': 'viewC.h5',
}

# cgroup is the calibrated CameraGroup from earlier in the aniposelib tutorial
d = load_pose2d_fnames(fname_dict, cam_names=cgroup.get_names())

score_threshold = 0.5

n_cams, n_points, n_joints, _ = d['points'].shape
points = d['points']
scores = d['scores']

bodyparts = d['bodyparts']


# drop low-confidence detections before triangulating
points[scores < score_threshold] = np.nan

points_flat = points.reshape(n_cams, -1, 2)
scores_flat = scores.reshape(n_cams, -1)

p3ds_flat = cgroup.triangulate(points_flat, progress=True)
reprojerr_flat = cgroup.reprojection_error(p3ds_flat, points_flat, mean=True)

p3ds = p3ds_flat.reshape(n_points, n_joints, 3)
reprojerr = reprojerr_flat.reshape(n_points, n_joints)

from the aniposelib tutorial doesn't seem to do anything at all.

@YitingChang

YitingChang commented Dec 6, 2024

@siddypot Yes, I have successfully triangulated using this converter! I first create a configuration file. Then, I set the paths to data (see below) and use the triangulate function directly.

config_file: path to the configuration file
calib_folder: path to the calibration folder
video_folder: path to the video folder
pose2d_folder: path to the 2d pose folder (h5 files)
output_fname: path to the output file (.csv)
camera_names: a list of camera names

from glob import glob
import os

import toml
from anipose.triangulate import triangulate

# Load config file
config = toml.load(config_file)

# Create file name dictionary mapping each camera to its 2d pose h5 file
pose_2d_files = glob(os.path.join(pose2d_folder, '*.h5'))
fname_dict = dict(zip(sorted(camera_names), sorted(pose_2d_files)))

# Triangulate
triangulate(config, calib_folder, video_folder, pose2d_folder,
            fname_dict, output_fname)

@themattinthehatt
Collaborator

@siddypot just wanted to check in to see if you've tried this out yet. I've talked with the anipose people and will work on integrating LP+Anipose sometime in January.

Btw, would you mind telling me which lab you're from and what kind of data you're working with? The LP team is beginning to work on a lot more functionality for multi-camera setups, so I'm curious about the needs different people have.
