
Stereo camera, preprocessing, transforms and frames = pointcloud?

Hello, I am trying to avoid the simple questions, so trust me; I've really tried to figure this out for myself. But I find this to be beyond my understanding.

**Situation** - I am acquiring a BGR-preprocessed stereo view, but this has raised some issues for me, since the camera software is aimed towards Indigo's (and Jade's?) `uvc_camera`. For the sake of concreteness, let's assume the simplest version of a differential wheeled robot, with a base footprint, base link etc., as frequently addressed in the tutorials. My goal is to achieve perception of the environment using generic packages as much as possible. The camera is published under the /stereo/ namespace.

1) How do you describe a stereo camera through xacro/URDF? (A sketch of what I currently have in mind is included below.)
 - Is the camera "one", with e.g. an "optical frame" as origin, or should the left and right cameras be described separately?
 - Which frames/transforms am I advised to implement?
 - Should I use a dedicated driver?

2) I keep getting error messages about the packages expecting a colored image. At the same time I am facing all these other challenges, which makes it impossible for me to deduce whether this is a cause or a consequence.
 - I have implemented stereo_image_proc (my current launch setup is sketched below), but with these issues unaddressed I find it natural that I don't get any disparity or point cloud data.

3) If colored images are required, could it be an idea to take the captured BGR frames, use OpenCV to create a colored image intensifying the parameters of interest, and then feed these to the stereo-vision algorithms at a lower frame rate? (I have managed to use the BGR images for OpenCV processing through a cv_bridge node; a sketch of it follows below.)

4) Related to the above: how do I successfully propagate odometry/pose in a simple conceptual system, where an EKF node provides filtered sensor data? I'm assuming I should remap a topic somewhere, but I'm struggling to catch the essential factor here. (My current guess is sketched below.)

5) In general, as with the camera, I'm struggling with the countless possibilities relating to frames, tf's and dedicated drivers. Should I use a kobuki driver or a differential-drive driver? What are the main pros and cons? (It should be mentioned that I won't receive my wheel encoders until the end of next week, so for now I will have to estimate odometry.)

I hope I'm not completely off track here; thanks in advance. Please notify me if I should change and/or elaborate on any of these issues.
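
For question 1, this is a minimal sketch of how I currently imagine the xacro/URDF description, with one "camera body" link plus a left and a right optical frame. The link/joint names, the 6 cm baseline and the mounting pose are my own guesses, not taken from any existing package:

```xml
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="diffbot_stereo_sketch">

  <xacro:property name="M_PI" value="3.14159265359"/>
  <xacro:property name="baseline" value="0.06"/>  <!-- distance between the two lenses (guess) -->

  <link name="base_link"/>

  <!-- One "camera body" link for the whole stereo head -->
  <link name="stereo_link"/>
  <joint name="stereo_joint" type="fixed">
    <parent link="base_link"/>
    <child link="stereo_link"/>
    <origin xyz="0.10 0 0.20" rpy="0 0 0"/>
  </joint>

  <!-- Left optical frame: z forward, x right, y down (REP 103 optical convention) -->
  <link name="left_camera_optical_frame"/>
  <joint name="left_camera_optical_joint" type="fixed">
    <parent link="stereo_link"/>
    <child link="left_camera_optical_frame"/>
    <origin xyz="0 ${baseline/2} 0" rpy="${-M_PI/2} 0 ${-M_PI/2}"/>
  </joint>

  <!-- Right optical frame, shifted sideways by the baseline -->
  <link name="right_camera_optical_frame"/>
  <joint name="right_camera_optical_joint" type="fixed">
    <parent link="stereo_link"/>
    <child link="right_camera_optical_frame"/>
    <origin xyz="0 ${-baseline/2} 0" rpy="${-M_PI/2} 0 ${-M_PI/2}"/>
  </joint>

</robot>
```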
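For question 2, this is roughly how I launch stereo_image_proc at the moment. The approximate_sync setting is my own assumption, since my left and right frames are not hardware-synchronised:

```xml
<launch>
  <!-- Everything lives under /stereo, so the node should find
       left/image_raw, right/image_raw, left/camera_info and right/camera_info there. -->
  <group ns="stereo">
    <node pkg="stereo_image_proc" type="stereo_image_proc" name="stereo_image_proc" output="screen">
      <!-- Frames are not hardware-synchronised, hence approximate sync (my assumption) -->
      <param name="approximate_sync" value="true"/>
    </node>
  </group>
</launch>
```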
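For question 3, this is the kind of cv_bridge node I had in mind. The topic names, the crude rate limiting and the placeholder "intensifying" step (a simple contrast tweak) are all assumptions about my own setup:

```python
#!/usr/bin/env python
# Sketch: convert the incoming BGR frame, do some OpenCV processing,
# and republish it at a lower rate for the stereo pipeline.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image


class Preprocessor(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.pub = rospy.Publisher('/stereo/left/image_processed', Image, queue_size=1)
        self.sub = rospy.Subscriber('/stereo/left/image_raw', Image, self.callback, queue_size=1)
        self.last_stamp = rospy.Time(0)
        self.min_period = rospy.Duration(0.2)   # roughly 5 Hz output instead of the full camera rate

    def callback(self, msg):
        # Crude frame-rate reduction: skip frames that arrive too soon after the last one
        if msg.header.stamp - self.last_stamp < self.min_period:
            return
        self.last_stamp = msg.header.stamp

        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        # Placeholder for "intensifying parameters of interest": a contrast/brightness tweak
        processed = cv2.convertScaleAbs(frame, alpha=1.5, beta=10)

        out = self.bridge.cv2_to_imgmsg(processed, encoding='bgr8')
        out.header = msg.header                 # keep the original stamp and frame_id
        self.pub.publish(out)


if __name__ == '__main__':
    rospy.init_node('bgr_preprocessor')
    Preprocessor()
    rospy.spin()
```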
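For question 4, my current guess looks like the following, using robot_pose_ekf as the EKF node. The remapped topic names (/imu/data and /stereo_odometer/odometry) are assumptions about my own setup, and wheel odometry is disabled since I have no encoders yet:

```xml
<launch>
  <node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf" output="screen">
    <param name="output_frame" value="odom_combined"/>
    <param name="freq" value="30.0"/>
    <param name="sensor_timeout" value="1.0"/>
    <!-- No wheel encoders yet, so no wheel odometry for now -->
    <param name="odom_used" value="false"/>
    <param name="imu_used" value="true"/>
    <param name="vo_used" value="true"/>
    <!-- Remaps reflecting my guess at my own topic names -->
    <remap from="imu_data" to="/imu/data"/>
    <remap from="vo" to="/stereo_odometer/odometry"/>
  </node>
</launch>
```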
