Channel: ROS Answers: Open Source Q&A Forum - RSS feed

Controller for robots

Hi, I found two ways to set up a controller to drive around with my mobile platform robot, and I am a bit confused.

First, I found the approach of creating a new xacro file and filling it with a gazebo tag and options for the controller:

    <gazebo>
      <plugin name="skid_steer_drive_controller" filename="libgazebo_ros_skid_steer_drive.so">
        <updateRate>100.0</updateRate>
        <leftFrontJoint>joint_base_wheel_front_left</leftFrontJoint>
        <rightFrontJoint>joint_base_wheel_front_right</rightFrontJoint>
        <leftRearJoint>joint_base_wheel_rear_left</leftRearJoint>
        <rightRearJoint>joint_base_wheel_rear_right</rightRearJoint>
        <wheelSeparation>${2*0.77}</wheelSeparation>
        <wheelDiameter>${2*wheel_radius}</wheelDiameter>
        <robotBaseFrame>heros_base_link</robotBaseFrame>
        <torque>130</torque>
        <commandTopic>cmd_vel</commandTopic>
        <odometryTopic>odom</odometryTopic>
        <odometryFrame>odom</odometryFrame>
        <broadcastTF>1</broadcastTF>
      </plugin>
    </gazebo>

What is this file, libgazebo_ros_skid_steer_drive.so?

The second way is to load a controller in my Gazebo launch file. For that I had this .yaml file with all the settings:

    robot_joint_publisher:
      type: "joint_state_controller/JointStateController"
      publish_rate: 50

    robot_velocity_controller:
      type: "diff_drive_controller/DiffDriveController"
      left_wheel: ['joint_base_wheel_front_left', 'joint_base_wheel_rear_left']
      right_wheel: ['joint_base_wheel_front_right', 'joint_base_wheel_rear_right']
      publish_rate: 100
      pose_covariance_diagonal: [0.001, 0.001, 0.001, 0.001, 0.001, 0.03]
      twist_covariance_diagonal: [0.001, 0.001, 0.001, 0.001, 0.001, 0.03]
      cmd_vel_timeout: 0.25

      # Base frame_id
      base_frame_id: robot_base_link

      # Odometry fused with IMU is published by robot_localization, so
      # no need to publish a TF based on encoders alone.
      enable_odom_tf: false

      # Wheel separation and radius multipliers
      wheel_separation_multiplier: 0.580
      wheel_radius_multiplier: 0.32

      # Velocity and acceleration limits
      linear:
        x:
          has_velocity_limits: true
          max_velocity: 0.9        # m/s
          has_acceleration_limits: true
          max_acceleration: 3.0    # m/s^2
      angular:
        z:
          has_velocity_limits: true
          max_velocity: 2.0        # rad/s
          has_acceleration_limits: true
          max_acceleration: 6.0    # rad/s^2

In option 1 I added the topic on which to publish the odometry and so on; in the second way I didn't. How does ROS then know where to publish those things?

To use all of this, I have to add a transmission tag to my wheels, like this. Is that right?

    <transmission name="...">
      <type>transmission_interface/SimpleTransmission</type>
      <joint name="...">
        <hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface>
      </joint>
      <actuator name="...">
        <mechanicalReduction>100</mechanicalReduction>
      </actuator>
    </transmission>

Are these two described ways the same thing, or does it make any difference which one I use? Are these common ways to set up the control of a robot, or did I do something incorrectly?

Thanks for your help,
Max
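
For the second approach, controllers defined in a .yaml file like the one above are typically loaded onto the parameter server and spawned via controller_manager. A minimal sketch of such a launch file (the package name my_robot_control and file name control.yaml are assumptions):

    <launch>
      <!-- Load the controller configuration onto the parameter server -->
      <rosparam file="$(find my_robot_control)/config/control.yaml" command="load"/>

      <!-- Spawn both controllers listed in the yaml -->
      <node name="controller_spawner" pkg="controller_manager" type="spawner"
            args="robot_joint_publisher robot_velocity_controller"/>
    </launch>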

How do I convert a string to a float?

I'm using an Arduino to send values from 0.00 to 5.00 and I'm trying to read them, but I get this error:

    raise ROSSerializationException(str(e))
    rospy.exceptions.ROSSerializationException: : 'required argument is not a float' when writing 'x: "0.92"

This is the code; how can I convert the value to a float?

    def __init__(self):
        rospy.init_node('my_tf_broadcaster')
        self.ser = serial.Serial('/dev/ttyACM0', 115200)
        self.get_odometry()

    def get_odometry(self):
        self.pub_tf = rospy.Publisher("/tf", tf.msg.tfMessage)
        change = 0.0
        while not rospy.is_shutdown():
            # Run this loop at about 10 Hz
            rospy.sleep(0.1)
            message = self.ser.readline()
            t = geometry_msgs.msg.TransformStamped()
            t.header.stamp = rospy.Time.now()
            t.header.frame_id = "odom"
            t.child_frame_id = "base_link"
            t.transform.translation.x = message.rstrip()  # trying to read this value
            t.transform.translation.y = 1.0
            t.transform.translation.z = 0.0
            t.transform.rotation.x = 0.0
            t.transform.rotation.y = 0.0
            t.transform.rotation.z = 0.0
            t.transform.rotation.w = 1.0
            tfm = tf.msg.tfMessage([t])
            self.pub_tf.publish(tfm)
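
The conversion the error message points to is Python's built-in float(). A minimal sketch of the parsing step inside the loop above, with a guard for malformed serial lines:

    message = self.ser.readline()
    try:
        value = float(message.strip())  # e.g. "0.92" -> 0.92
    except ValueError:
        rospy.logwarn("Skipping malformed serial line: %r", message)
        continue  # wait for the next reading
    t.transform.translation.x = value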

Can I use motors without encoders (open loop) for the base of a robot?

I have read that I need to publish odometry data so that the robot knows its current position. I don't have encoders built into the motors, but I know that the steps I am sending will be executed accurately. Is there a way I can publish the goal state as the current state, and will that work?
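
What the question proposes (publishing the commanded state as if it were the measured state) is open-loop dead reckoning. A minimal sketch of a node that republishes the commanded velocity as odometry; the topic names cmd_vel and odom and the frame names are assumptions:

    import rospy
    from nav_msgs.msg import Odometry
    from geometry_msgs.msg import Twist

    class OpenLoopOdom(object):
        """Trust the commanded velocity and republish it as odometry."""

        def __init__(self):
            self.pub = rospy.Publisher('odom', Odometry, queue_size=10)
            rospy.Subscriber('cmd_vel', Twist, self.cmd_cb)

        def cmd_cb(self, cmd):
            odom = Odometry()
            odom.header.stamp = rospy.Time.now()
            odom.header.frame_id = 'odom'
            odom.child_frame_id = 'base_link'
            # Open loop: assume the base executes the command exactly.
            odom.twist.twist = cmd
            # (A full implementation would also integrate cmd into a pose here.)
            self.pub.publish(odom)

    if __name__ == '__main__':
        rospy.init_node('open_loop_odom')
        OpenLoopOdom()
        rospy.spin()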

Robot Localization with Odometry and UM7

Hello all, I am trying to get localization to work for a custom car-like robot. I am able to publish odom from wheel encoders and imu/data from my UM7 IMU sensor. I would like to fuse these two data sources using the [robot localization](http://docs.ros.org/indigo/api/robot_localization/html/index.html) package and then show the result with base_link in rviz. Without proper localization of base_link, I will not be able to run the navigation stack properly. In my current setup, rviz is able to show forward and backward movement of base_link; however, when I manually rotate the UM7 IMU sensor, the rotation is not reflected in rviz. I can see that the odometry/filtered topic is receiving input from the UM7, but how do I visualize this in rviz? My end goal is to place this UM7 on my custom car-like robot. I want rviz to display base_link mimicking its real-life counterpart in terms of position/orientation and velocities (moving forward/backward, turning left/right, etc.). It is my understanding that this is what the robot_localization package is for; please correct me if I am mistaken. Here are two videos describing my problem: the [1st video](https://www.youtube.com/watch?v=Sa5ZqVbpd6o&feature=youtu.be) shows the screen recording from my computer (you can see that moving forward and backward is okay, but I do not know how to turn base_link); the [2nd video](https://www.youtube.com/watch?v=40nO4phqiqI&feature=youtu.be) shows what I am trying to do. Below is the configuration file for the robot_localization package:

    frequency: 50
    sensor_timeout: 0.1
    two_d_mode: true
    transform_time_offset: 0.0
    transform_timeout: 0.0
    print_diagnostics: true
    debug: false
    debug_out_file: /path/to/debug/file.txt
    publish_tf: true
    publish_acceleration: false

    odom_frame: odom            # Defaults to "odom" if unspecified
    base_link_frame: base_link  # Defaults to "base_link" if unspecified
    world_frame: odom           # Defaults to the value of odom_frame if unspecified

    odom0: /odom
    odom0_config: [false, false, false,
                   false, false, true,
                   true,  true,  false,
                   false, false, true,
                   false, false, false]
    odom0_queue_size: 2
    odom0_nodelay: false
    odom0_differential: false
    odom0_relative: false
    odom0_pose_rejection_threshold: 5
    odom0_twist_rejection_threshold: 1

    imu0: /imu/data
    imu0_config: [false, false, false,
                  false, false, true,
                  false, false, false,
                  false, false, true,
                  false, false, false]
    imu0_nodelay: false
    imu0_differential: false
    imu0_relative: true
    imu0_queue_size: 5
    imu0_pose_rejection_threshold: 0.8   # Note the difference in parameter names
    imu0_twist_rejection_threshold: 0.8
    # imu0_linear_acceleration_rejection_threshold: 0.8
    # imu0_remove_gravitational_acceleration: true

    use_control: true
    stamped_control: false
    control_timeout: 0.2
    control_config: [true, false, false, false, false, true]
    acceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 3.4]
    deceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 4.5]
    acceleration_gains: [0.8, 0.0, 0.0, 0.0, 0.0, 0.9]
    deceleration_gains: [1.0, 0.0, 0.0, 0.0, 0.0, 1.0]

    process_noise_covariance: [0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                               0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                               0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                               0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                               0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                               0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                               0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0, 0,
                               0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0,
                               0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0,
                               0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
                               0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
                               0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0,
                               0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
                               0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0,
                               0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.015]

    initial_estimate_covariance: [1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                                  0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                                  0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                                  0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                                  0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                                  0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                                  0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
                                  0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
                                  0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
                                  0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
                                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
                                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
                                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
                                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
                                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
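
A quick way to verify whether the UM7 yaw actually reaches the filter output is to watch the yaw of /odometry/filtered while rotating the sensor; a minimal sketch (the topic name assumes the package default):

    import rospy
    from nav_msgs.msg import Odometry
    from tf.transformations import euler_from_quaternion

    def cb(msg):
        q = msg.pose.pose.orientation
        _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
        rospy.loginfo("filtered yaw: %.3f rad", yaw)

    rospy.init_node('yaw_check')
    rospy.Subscriber('/odometry/filtered', Odometry, cb)
    rospy.spin()

In rviz itself, an Odometry display subscribed to /odometry/filtered should show the same rotation, provided the fixed frame is set to odom.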

Robot doesn't start moving

I have set up the navigation stack on my robot with all the nodes up and running (move_base, amcl, costmap_2d, etc.). I am also able to move my robot using teleoperation (keyboard control) and see its motion in rviz. Now I have created a map using gmapping and loaded it with amcl, but when I give a 2D Nav Goal command from rviz, the robot does not move. It shows the planned path in rviz but does not move. In the terminal it prints 'Got new plan' many times and at the end prints 'Rotate recovery behavior started'. After that, nothing happens. Any ideas why the robot is not moving? Thanks
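
A common first check here is whether move_base actually publishes velocity commands once the goal is active, and whether the base driver is subscribed to them (this assumes the default cmd_vel topic name, which may be remapped in a given setup):

    # Does move_base emit velocity commands after the 2D Nav Goal?
    rostopic echo /cmd_vel

    # Is the base driver among the subscribers?
    rostopic info /cmd_vel

If commands appear but the base stays still, the problem is on the base-driver side; if no commands appear, it is in the move_base configuration.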

How to fuse encoder ticks + IMU for odometry?

Hi, I have an IMU which shows the orientation perfectly, and I have also installed some optical encoders. When I move the robot straight, the encoders show the movement more or less correctly. However, when I turn and then move, the IMU first displays the new orientation, but as soon as I move after the turn my robot suddenly drifts back and then moves. I guess this is because the encoders do their job separately from the IMU, but how can I fuse them together? This is my encoder tick callback (one of them; the second is similar):

    void Odometry::leftencoderCb(const std_msgs::Int64::ConstPtr& left_ticks)
    {
        double enc = left_ticks->data;

        // Handle wrap-around of the tick counter
        if ((enc < encoder_low_wrap) && (prev_lencoder > encoder_high_wrap)) {
            lmult = lmult + 1;
        }
        if ((enc > encoder_high_wrap) && (prev_lencoder < encoder_low_wrap)) {
            lmult = lmult - 1;
        }

        left = 1.0 * (enc + lmult * (encoder_max - encoder_min));
        prev_lencoder = enc;
    }

This is my IMU callback, which stores the orientation quaternion:

    void Odometry::imuCb(const sensor_msgs::Imu::ConstPtr& data)
    {
        orient_x = data->orientation.x;
        orient_y = data->orientation.y;
        orient_z = data->orientation.z;
        orient_w = data->orientation.w;
    }

And finally, here is my update code, which updates the odometry periodically:

    void Odometry::update()
    {
        ros::Time now = ros::Time::now();
        double elapsed;
        double d_left, d_right, d, th, x, y;

        elapsed = now.toSec() - then.toSec();

        if (enc_left == 0) {
            d_left = 0;
            d_right = 0;
        } else {
            d_left = (left - enc_left) / (ticks_meter);
            d_right = (right - enc_right) / (ticks_meter);
        }

        enc_left = left;
        enc_right = right;

        d = (d_left + d_right) / 2.0;
        th = (d_right - d_left) / base_width;

        dx = d / elapsed;
        dr = th / elapsed;

        if (d != 0) {
            ROS_INFO("Distance has been changed!");
            x = cos(th) * d;
            y = -sin(th) * d;
            x_final += (cos(theta_final) * x - sin(theta_final) * y);
            y_final += (sin(theta_final) * x + cos(theta_final) * y);
        }
        theta_final += th;

        // Orientation taken from the IMU:
        geometry_msgs::Quaternion odom_quat;
        odom_quat.x = this->orient_x;
        odom_quat.y = this->orient_y;
        odom_quat.z = this->orient_z;
        odom_quat.w = this->orient_w;

        geometry_msgs::TransformStamped odom_trans;
        odom_trans.header.stamp = now;
        odom_trans.header.frame_id = "odom";
        odom_trans.child_frame_id = "base_footprint";
        odom_trans.transform.translation.x = x_final;
        odom_trans.transform.translation.y = y_final;
        odom_trans.transform.translation.z = 0.0;
        odom_trans.transform.rotation = odom_quat;
        odom_broadcaster.sendTransform(odom_trans);

        // Odometry message:
        nav_msgs::Odometry odom;
        odom.header.stamp = now;
        odom.header.frame_id = "odom";
        odom.pose.pose.position.x = x_final;
        odom.pose.pose.position.y = y_final;
        odom.pose.pose.position.z = 0.0;
        odom.pose.pose.orientation = odom_quat;
        odom.child_frame_id = "base_footprint";
        odom.twist.twist.linear.x = dx;
        odom.twist.twist.linear.y = 0.0;  // dy;
        odom.twist.twist.angular.z = 0.0; // dr;
        odom_pub.publish(odom);

        then = now;
        ros::spinOnce();
    }

Can someone suggest how I can write the odometry code more correctly?
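
One likely cause of the drift-back: the position is integrated with the encoder-derived heading (theta_final), while the published orientation comes from the IMU, so pose and orientation disagree right after a turn. A common fix is to use the IMU yaw for the position integration as well (or to hand both sensors to the robot_localization EKF). A minimal sketch of the corrected integration step, written as Python-style pseudocode over the names from the code above:

    import math

    # Yaw extracted from the stored IMU quaternion (orient_x .. orient_w)
    yaw = math.atan2(2.0 * (orient_w * orient_z + orient_x * orient_y),
                     1.0 - 2.0 * (orient_y * orient_y + orient_z * orient_z))

    # d is the encoder-measured distance of this update step
    x_final += d * math.cos(yaw)
    y_final += d * math.sin(yaw)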

Odometry for a wheelchair

Hi there, I'm planning to use the navigation stack package. I currently have the lidar attached to my base, an electric wheelchair. However, odometry seems to be required for the navigation package. I have searched around, but it seems like normal wheel odometry might not fit my case... Is there another good and **cheap** option (type and model) for that? Or is there a package that can substitute for physical odometry (it would be great if there is a tutorial)? Thanks!
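
One package-level substitute is laser odometry, which estimates the robot's motion from consecutive lidar scans instead of wheel encoders. A hedged launch sketch for the rf2o_laser_odometry package (the parameter values here are assumptions to adapt to the wheelchair):

    <launch>
      <node pkg="rf2o_laser_odometry" type="rf2o_laser_odometry_node" name="rf2o_laser_odometry">
        <param name="laser_scan_topic" value="/scan"/>  <!-- topic of the mounted lidar -->
        <param name="odom_topic" value="/odom"/>        <!-- odometry output for move_base -->
        <param name="publish_tf" value="true"/>         <!-- broadcast odom -> base_link -->
        <param name="base_frame_id" value="base_link"/>
        <param name="odom_frame_id" value="odom"/>
        <param name="freq" value="10.0"/>
      </node>
    </launch>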

How to combine LIDAR and IMU with Hector mapping.

Hi, I am a beginner with ROS, trying to fuse an IMU (for odometry) with a Hokuyo LIDAR and hector_mapping to create an accurate map. Currently the map works in rviz, but the frames do not move continuously. Sorry for the links, but I am unable to add the images directly. I am unsure where to begin and have just confused myself over the past week. I am using the xsens IMU package [here](https://github.com/ethz-asl/ethzasl_xsens_driver), and I'm trying to follow this tutorial for setting up hector_mapping [here](http://wiki.ros.org/hector_slam/Tutorials/SettingUpForYourRobot). This is a graph of my [frames and nodes here](https://imgur.com/a/FOjJz8c), and these are the current nodes. These are my launch files for the [mapping](https://pastebin.com/dM5yfdze), the [IMU](https://pastebin.com/CpQpW2L5), and the [hector attitude-to-tf package](https://pastebin.com/PA8DM9m3). I am not quite sure what I am doing even after reading the docs, and I would really appreciate any pointers.
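
For reference, the tutorial linked above mostly comes down to configuring a hector_mapping node and a consistent tf tree; a minimal sketch for a robot without wheel odometry (frame names and values are assumptions to adapt):

    <launch>
      <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping">
        <param name="map_frame" value="map"/>
        <param name="base_frame" value="base_link"/>
        <!-- Without wheel odometry, hector can use base_link as the odom frame -->
        <param name="odom_frame" value="base_link"/>
        <param name="pub_map_odom_transform" value="true"/>
        <param name="scan_topic" value="scan"/>
        <param name="map_resolution" value="0.05"/>
      </node>
    </launch>

The IMU then enters the picture through tf (e.g. via hector_imu_attitude_to_tf), not through hector_mapping's own parameters.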

Fake encoder ticks from laser scanner?

Hi, this is my first post here. I have a 4WD differential robot with encoders, but after testing I think the encoders don't work (except one). So I thought I would do away with encoders altogether and use a 2D YDLIDAR. With hector_slam / rf2o_laser_odometry I can publish the odom required for interfacing with the navigation stack. I also have a Kinect and will buy an IMU. My problem is how to implement closed-loop PID control, since I don't have encoders to measure velocity. Can I infer velocity from the laser scan and use that for PID control? And can I use this inferred velocity to infer fake encoder ticks for interfacing with ros_control? I have a Jetson TX2, so the computing performance is good. Thanks a lot!!
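
In principle, a laser-odometry twist can be mapped back through differential-drive inverse kinematics to per-wheel velocities and integrated into synthetic tick counts. A minimal sketch (the input topic odom_rf2o, WHEEL_BASE, and TICKS_PER_METER are assumptions):

    import rospy
    from nav_msgs.msg import Odometry
    from std_msgs.msg import Int64

    WHEEL_BASE = 0.30       # m, distance between left and right wheels (assumed)
    TICKS_PER_METER = 1000  # resolution of the fake encoder (assumed)

    class FakeTicks(object):
        """Turn laser-odometry velocity into synthetic encoder ticks."""

        def __init__(self):
            self.left = 0.0
            self.right = 0.0
            self.last = None
            self.lpub = rospy.Publisher('lwheel_ticks', Int64, queue_size=10)
            self.rpub = rospy.Publisher('rwheel_ticks', Int64, queue_size=10)
            rospy.Subscriber('odom_rf2o', Odometry, self.odom_cb)

        def odom_cb(self, msg):
            now = msg.header.stamp.to_sec()
            if self.last is not None:
                dt = now - self.last
                v = msg.twist.twist.linear.x
                w = msg.twist.twist.angular.z
                # Differential-drive inverse kinematics
                v_l = v - w * WHEEL_BASE / 2.0
                v_r = v + w * WHEEL_BASE / 2.0
                self.left += v_l * dt * TICKS_PER_METER
                self.right += v_r * dt * TICKS_PER_METER
                self.lpub.publish(Int64(int(self.left)))
                self.rpub.publish(Int64(int(self.right)))
            self.last = now

    if __name__ == '__main__':
        rospy.init_node('fake_ticks')
        FakeTicks()
        rospy.spin()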

Improve map created by Gmapping

Hi all, I implemented the gmapping algorithm for a featureless tunnel/pipe environment. Since lidar/camera are not that effective in such environments, stable odometry is the primary data that can be used. I read that there are many parameters in the gmapping.launch.xml file that can be tuned to get the desired results. I tried tuning the number of particles and map_update_interval, and it really changed a lot. But there are many other parameters, like minimumScore, linearUpdate, srr, srt, etc., which might affect the map as well. I'm not sure how to tune these for my application, since there can be many possibilities. Can someone help me in this regard?
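
The parameters named above belong to gmapping's scan matcher and its odometry motion model. A hedged starting point for an odometry-dominated, feature-poor environment (all values are assumptions to tune from, not recommendations):

    <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
      <!-- Minimum scan-match score to accept a match; raising it makes gmapping
           fall back to odometry when featureless walls produce bad matches -->
      <param name="minimumScore" value="200"/>
      <!-- Process a new scan only after this much translation (m) / rotation (rad) -->
      <param name="linearUpdate" value="0.5"/>
      <param name="angularUpdate" value="0.25"/>
      <!-- Odometry error model: srr = translation error from translation,
           srt = translation error from rotation; str/stt are the rotational
           counterparts. Smaller values mean "trust the odometry more". -->
      <param name="srr" value="0.01"/>
      <param name="srt" value="0.02"/>
      <param name="str" value="0.01"/>
      <param name="stt" value="0.02"/>
    </node>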