EKF for vision landing #132
Here I published a Vision Landing implementation using AprilTags: I also published a MAVLink Camera Simulator for testing.
I found that latency matters more when the drone is closer to the landing target. This is a trade-off: a high-resolution image improves the marker detection range but increases latency. I am thinking about a two-stage approach: use high-resolution images in the beginning, then switch to low resolution when we are closing in on the target.
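As a rough illustration of that two-stage idea, here is a minimal Python sketch; the resolutions, the switch threshold, and the function name are placeholders, not values from the actual implementation:

```python
# Two-stage capture strategy (sketch): high resolution far from the target for
# detection range, low resolution near the target to cut processing latency.

HIGH_RES = (1920, 1080)   # better detection range, higher latency
LOW_RES = (640, 480)      # lower latency close to the pad
SWITCH_DISTANCE_M = 3.0   # hypothetical switch-over threshold


def pick_resolution(distance_to_target_m):
    """Select the capture resolution based on the estimated distance to the marker."""
    if distance_to_target_m is None or distance_to_target_m > SWITCH_DISTANCE_M:
        return HIGH_RES
    return LOW_RES
```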
I believe you are referring to the "marker detection latency", i.e. the time to detect and estimate the pose of the markers when they get close to the camera? But I'm referring to the overall latency problem (caused by video capture, encoding, transmitting, decoding, detecting and estimating the pose, sending MAVLink commands, and processing MAVLink commands). Imagine your drone takes a picture on...

Of course, the lazy solution could be to move slowly or to try to reduce the overall latency with better hardware... But I believe that by using the correct model (EKF), we should be able to avoid moving in wrong directions and get the drone to land fast and precisely even with high latency.

Now, in your case, you just send the landing point to the flight controller, and the EKF and the motion control are implemented there (in ArduPilot, PX4, RosettaDrone, et al.). In theory, you just have to make sure that you are telling the flight controller the exact timestamp of the image used for the pose estimation, and the flight controller should use this information not to move in that direction, but to adjust its estimation model parameters to predict the future pose of the landing point and to compute the direction it should move in at the present.

But it seems to me that the implementation, and probably also the MAVLink message definition (!), was or still is not mature. @fnoop said: "There was a particular PR to solve a vital issue that never got implemented and I had been trying to get this resolved for so long that I basically gave up and moved on with life at that point..". I'm researching...
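For reference, a minimal pymavlink sketch of stamping the LANDING_TARGET message with the image capture time rather than the send time; the connection string, frame choice, and field values are illustrative, and whether the flight controller actually uses time_usec for latency compensation depends on the firmware:

```python
from pymavlink import mavutil

# Illustrative connection string; adjust for your setup.
master = mavutil.mavlink_connection("udpout:127.0.0.1:14550")
master.wait_heartbeat()


def send_landing_target(capture_time_s, angle_x, angle_y, distance_m):
    """Send a LANDING_TARGET message stamped with the image *capture* time,
    not the send time, so the autopilot can account for processing latency."""
    master.mav.landing_target_send(
        int(capture_time_s * 1e6),           # time_usec: when the frame was captured
        0,                                    # target_num
        mavutil.mavlink.MAV_FRAME_BODY_FRD,   # frame (illustrative choice)
        angle_x,                              # angular offset X (rad)
        angle_y,                              # angular offset Y (rad)
        distance_m,                           # distance to the target (m)
        0.0, 0.0,                             # size_x, size_y (rad), unknown here
    )
```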
If you can get access to drone information like lat, long, alt, gimbal angle, and heading (north, south, east, west) in degrees, then you can use geometry to guesstimate the approximate GPS location and head towards that location. Create a PID controller based on your vision algorithm to use the pose estimation for more precision.
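A minimal PID sketch of that idea (a hypothetical class, not taken from any of the linked repos); the gains are placeholders and would need tuning:

```python
class PID:
    """Minimal PID controller; gains and output limit are placeholders, not tuned values."""

    def __init__(self, kp, ki, kd, limit):
        self.kp, self.ki, self.kd, self.limit = kp, ki, kd, limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.limit, min(self.limit, out))


# Example: one controller per horizontal axis, fed with the marker offset (m)
# estimated from the vision pose, producing a velocity setpoint (m/s).
pid_x = PID(kp=0.4, ki=0.02, kd=0.1, limit=1.0)
pid_y = PID(kp=0.4, ki=0.02, kd=0.1, limit=1.0)
```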
Yes, this is alternative 1 b) use the drone's positioning system (which internally uses all sensors and optical flow) + 3 a) use an absolute coordinate frame. How accurate and consistent is your drone's positioning system?
Some info about DJI's internal EKF: If the GPS jumps, we could maybe fuse the velocities to estimate the drone's attitude. @The1only, what is your experience with KF?
We will probably work with this implementation (WIP):
I made some tests with a DJI Mini SE:
Thus, we will have to estimate the drone's position based only on the set velocities (virtual stick actions) and use the visually estimated pose to adjust the model.
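A rough sketch of that predict/correct loop, assuming we simply integrate the commanded virtual-stick velocities and blend in the visual fix when a detection arrives; this is a complementary-filter toy, not DJI's internal EKF, and the names and blending gain are made up:

```python
import numpy as np


class PositionEstimator:
    """Dead-reckon from commanded velocities, then nudge toward the visual fix."""

    def __init__(self, alpha=0.3):
        self.position = np.zeros(3)  # x, y, z relative to the landing target (m)
        self.alpha = alpha           # blending gain for visual corrections (assumed)

    def predict(self, commanded_velocity, dt):
        """Integrate the velocity we commanded via virtual sticks."""
        self.position += np.asarray(commanded_velocity) * dt

    def correct(self, visual_position):
        """Blend in the marker-derived position when a detection arrives."""
        visual_position = np.asarray(visual_position)
        self.position += self.alpha * (visual_position - self.position)
```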
I finally came up with these algorithms:
I'm using visual pose estimation of Apriltags for precision landing (https://github.com/kripper/vision-landing-2).
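For context, a minimal AprilTag pose-estimation sketch using the pupil_apriltags binding (vision-landing-2 may use a different detector); the camera intrinsics and tag size are placeholders:

```python
import cv2
from pupil_apriltags import Detector  # one of several AprilTag bindings

detector = Detector(families="tag36h11")

# Placeholder camera intrinsics (fx, fy, cx, cy) and tag size in metres.
CAMERA_PARAMS = (600.0, 600.0, 320.0, 240.0)
TAG_SIZE_M = 0.16


def detect_marker(frame_bgr):
    """Return the pose of the first detected tag in the camera frame, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detections = detector.detect(
        gray, estimate_tag_pose=True,
        camera_params=CAMERA_PARAMS, tag_size=TAG_SIZE_M,
    )
    if not detections:
        return None
    d = detections[0]
    return d.pose_R, d.pose_t  # rotation matrix and translation in the camera frame
```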
I see at least 3 problems to address:
1. If the error of the positioning system is too big, maybe it makes sense to only use visual pose estimation?
2. The visually estimated pose of the marker (=> drone attitude) is never the current pose because of the overall latency (the processed image was taken in the past). This is a big problem, especially when you are rotating and translating at the same time, because "forward" and "right" depend heavily on the current yaw (calculating motion based on a previous yaw will move us in a completely wrong direction). This means we would have to apply corrections to the model using information from the past in order to use the model in the future (prediction). I believe EKF provides this solution (see the sketch at the end of this comment).
3. Which frame model to use? If the positioning system is accurate, an a) absolute frame model would be best. Otherwise, if we use vision, maybe a b) frame relative to the target is more convenient?
IMO absolute positioning is more convenient because it allows us to compare positions of 3 or more objects (e.g. the drone + two other objects).
Most precision landing solutions just send the marker's pose to the flight controller. In our case, we will have to implement the motion control in Rosetta using virtual stick commands.
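Regarding problem 2 (latency), here is a sketch of the delayed-measurement idea: keep a short history of predicted states so a visual pose computed from an old frame can be applied at its capture time and the motion since then replayed. This is only the bookkeeping part of what an EKF with delayed measurements would do; all names are hypothetical:

```python
from collections import deque

import numpy as np


class DelayedCorrector:
    """Buffer (timestamp, position, commanded_velocity) samples so a visual pose
    estimated from an old frame can be inserted at its capture time and the
    commanded motion since then re-integrated. A sketch, not a full EKF."""

    def __init__(self, history_s=2.0):
        self.history = deque()      # (t, position, velocity)
        self.history_s = history_s

    def push(self, t, position, velocity):
        self.history.append((t, np.array(position), np.array(velocity)))
        while self.history and t - self.history[0][0] > self.history_s:
            self.history.popleft()

    def correct(self, capture_time, visual_position):
        """Replace the state at capture_time with the visual fix, then replay the
        commanded velocities up to the newest buffered sample."""
        samples = [s for s in self.history if s[0] >= capture_time]
        position = np.array(visual_position, dtype=float)
        for (t0, _, v0), (t1, _, _) in zip(samples, samples[1:]):
            position += v0 * (t1 - t0)
        return position
```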