
Temp2 #4


Open
wants to merge 3 commits into main

Conversation

y-veys (Collaborator) commented Apr 28, 2025

This merge will add two features to main: a pick skill that uses YOLOWorld and a place skill that uses the impedance controller written by Adelene. The code has been reformatted and passed through the linter.

I checked out temp2 in the ADT4 repo and it didn't break anything related to the fake spot (i.e., visualizing the spot in a prior scene graph and building a scene graph from a bag).

y-veys added 3 commits April 27, 2025 10:17
Added YOLOWorld pick functionality

Added detection utils python

Added impedance control for placing

Revert back to working spot_executor_ros.py

Add if spot_fake_interface for publish pose to work
@y-veys y-veys requested a review from GoldenZephyr April 28, 2025 20:00
@y-veys y-veys assigned y-veys and unassigned y-veys Apr 28, 2025
GoldenZephyr (Collaborator) left a comment


Looking good, let me know your thoughts on the couple of comments.

@@ -33,10 +34,18 @@ def __init__(
take_lease=True,
set_estop=False,
verbose=False,
semantic_model_path="../data/models/efficientvit_seg_l2.onnx",
semantic_model_path="/home/rrg/dcist_ws/src/awesome_dcist_t4/spot_tools/spot_tools/data/models/efficientvit_seg_l2.onnx",

Can we handle these paths a little more generally? I think we should either 1) make the path optional (e.g., default to None) and skip loading the YOLO model when it's not set, which could be useful if we want to run Spot's non-pick skills without having to load YOLO or set the weights, or 2) make the path required. Either way, the hard-coded RRG path can live higher up (in test_spot or something?).
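A minimal sketch of option 1, assuming the constructor signature from the diff above; the class name, _load_yolo_model helper, object_grasp_yolo method, and the ultralytics YOLOWorld backend are placeholders, not the repo's actual API:

class SpotInterfaceSketch:
    def __init__(self, take_lease=True, set_estop=False, verbose=False,
                 semantic_model_path=None):
        # Default to None and skip model loading, so non-pick skills can run
        # without the YOLO weights being present on disk.
        self.yolo_model = None
        if semantic_model_path is not None:
            self.yolo_model = self._load_yolo_model(semantic_model_path)

    def _load_yolo_model(self, path):
        # Assumption: an ultralytics-style YOLOWorld backend.
        from ultralytics import YOLOWorld
        return YOLOWorld(path)

    def object_grasp_yolo(self, object_name):
        if self.yolo_model is None:
            raise RuntimeError("semantic_model_path was not set; pick skill unavailable")
        # ... run detection and grasp ...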

labelspace_map=self.labelspace_map,
debug=self.debug,
)
# success = object_grasp(

If we have at least some faith that object_grasp_YOLO works / will work soon, we should delete this commented out part.

@@ -0,0 +1,2 @@
# Given spot, name of camera, and an object name, this script will return the bounding box of the object

remove

# Draw bounding box and label
# cv2.rectangle(annotated_img, (x1, y1), (x2, y2), (0, 255, 0), 2)
label = f"{r.names[class_id]} {best_confidence:.2f}"
cv2.putText(

Do you know if the opencv drawing works if you are running in ROS? In ROS 1 at least, opencv drawing would not work from callback threads.

If the opencv visualization doesn't work in ROS, we should consider dealing with the feedback using the FeedbackCollector. The opencv display code can go in a function there that gets called inside the skill. The visualization would work the same way when you're not using ROS, but it would enable us to pipe the image to RVIZ or something when we run with ROS.
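A rough sketch of that idea: the feedback-hook class and publisher wiring below are assumptions, only the cv2 and cv_bridge calls are standard.

import cv2

class ImageFeedbackSketch:
    """Hypothetical feedback hook: draws locally without ROS, publishes under ROS."""

    def __init__(self, ros_image_publisher=None):
        # If a sensor_msgs/Image publisher is provided, pipe frames to ROS
        # (e.g., for RVIZ) instead of calling imshow from a callback thread.
        self.ros_image_publisher = ros_image_publisher

    def show_detection(self, annotated_img, window_name="detections"):
        if self.ros_image_publisher is None:
            cv2.imshow(window_name, annotated_img)
            cv2.waitKey(1)
        else:
            from cv_bridge import CvBridge
            msg = CvBridge().cv2_to_imgmsg(annotated_img, encoding="bgr8")
            self.ros_image_publisher.publish(msg)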

@@ -254,13 +255,37 @@ def tf_lookup_fn(parent, child):
10,
)

if not use_fake_spot_interface:

Can we add a separate param for this? publish_spot_tf or something? We want the option of getting the TF from the new camera/TF interface from Nathan
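A rough rclpy sketch of that parameter; the node class name and TF broadcaster wiring are assumptions, while declare_parameter/get_parameter are standard rclpy calls:

from rclpy.node import Node
from tf2_ros import TransformBroadcaster

class SpotExecutorSketch(Node):
    def __init__(self):
        super().__init__("spot_executor_sketch")
        # Separate knob: whether this node publishes Spot's TF at all,
        # independent of use_fake_spot_interface, so TF could instead come
        # from the new camera/TF interface.
        self.declare_parameter("publish_spot_tf", True)
        publish_spot_tf = self.get_parameter("publish_spot_tf").value
        self.tf_broadcaster = TransformBroadcaster(self) if publish_spot_tf else None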

self.heartbeat_pub = self.create_publisher(NodeInfoMsg, "~/node_status", 1)
heartbeat_timer_group = MutuallyExclusiveCallbackGroup()
timer_period_s = 0.1
self.timer = self.create_timer(
timer_period_s, self.hb_callback, callback_group=heartbeat_timer_group
)

def publish_pose(self):

Can we extend this to be responsible for publishing the fake spot TF as well? I think it's too confusing if we publish the fake spot TF from inside FakeSpotRos but the real TF from here. Another option would be to add a SpotRos that is analogous to FakeSpotRos, which would nicely keep the non-executor functionality out of this file, but maybe that overhead isn't worth it since I don't think anything else would go into SpotRos.
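For reference, a method-body sketch of the first option (publish_pose owning TF for both the real and fake Spot); the get_pose accessor and the frame names are assumptions:

from geometry_msgs.msg import TransformStamped

def publish_pose(self):
    # Works for both the real and fake interface, so FakeSpotRos no longer
    # needs its own TF broadcast. get_pose() is a hypothetical accessor that
    # returns a geometry_msgs/Pose for the body in the parent frame.
    pose = self.spot_interface.get_pose()

    tf_msg = TransformStamped()
    tf_msg.header.stamp = self.get_clock().now().to_msg()
    tf_msg.header.frame_id = "vision"   # assumed parent frame
    tf_msg.child_frame_id = "body"      # assumed Spot body frame
    tf_msg.transform.translation.x = pose.position.x
    tf_msg.transform.translation.y = pose.position.y
    tf_msg.transform.translation.z = pose.position.z
    tf_msg.transform.rotation = pose.orientation
    self.tf_broadcaster.sendTransform(tf_msg)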
