Hi all,
I have a mobile robot with multiple ultrasound and IR sensors on the front, sides and back. It also has rotational encoders to generate odometry data and IMU/GPS for localization. I would like the robot to navigate around the building with a given map using these sensors only (I don't have a laser sensor). My question is how I can interpret the data from the ultrasound and IR sensors and use it for navigation. I would appreciate it if someone could point me in the right direction and tell me about existing ROS packages, relevant links and other resources that would be helpful for the above-mentioned problem.
I know this is a very general question; I am not looking for specific answers, just pointers to good sources from people who have worked on similar problems before.
Edit 1: One more thing: is it possible to improve localization using proximity sensors (something similar to amcl, which uses laser scans)? Maybe generate fake laser scan data from the proximity sensors and then use amcl to improve the localization of the robot, or maybe some other way. Does anyone have any insights on this?
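To make the fake-laser idea concrete, here is a minimal sketch of a node that folds a few `sensor_msgs/Range` readings into one `sensor_msgs/LaserScan`. The topic names, mounting bearings, ranges and frame in it are made-up assumptions for illustration, not my actual setup:

#include <ros/ros.h>
#include <sensor_msgs/Range.h>
#include <sensor_msgs/LaserScan.h>
#include <boost/bind.hpp>
#include <cmath>
#include <limits>
#include <vector>

// One latest reading per sensor; BEARINGS are where each sensor points
// relative to base_link (example values, adjust for the real mounting).
static const int NUM_SENSORS = 3;
static const double BEARINGS[NUM_SENSORS] = {-M_PI / 4, 0.0, M_PI / 4};
static std::vector<double> latest(NUM_SENSORS, std::numeric_limits<double>::infinity());

void rangeCb(const sensor_msgs::Range::ConstPtr& msg, int idx)
{
  latest[idx] = msg->range;   // remember the most recent reading per sensor
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "range_to_scan");
  ros::NodeHandle nh;

  const char* topics[NUM_SENSORS] = {"/us_left", "/us_front", "/us_right"};  // assumed topics
  std::vector<ros::Subscriber> subs;
  for (int i = 0; i < NUM_SENSORS; ++i)
    subs.push_back(nh.subscribe<sensor_msgs::Range>(topics[i], 1, boost::bind(&rangeCb, _1, i)));

  ros::Publisher scan_pub = nh.advertise<sensor_msgs::LaserScan>("fake_scan", 1);

  ros::Rate rate(10);
  while (ros::ok())
  {
    ros::spinOnce();

    sensor_msgs::LaserScan scan;
    scan.header.stamp = ros::Time::now();
    scan.header.frame_id = "base_link";        // assumes all sensors are expressed in base_link
    scan.angle_min = -M_PI / 2;
    scan.angle_max =  M_PI / 2;
    scan.angle_increment = M_PI / 180.0;       // one-degree bins
    scan.range_min = 0.02;
    scan.range_max = 3.0;
    int n = (int)std::floor((scan.angle_max - scan.angle_min) / scan.angle_increment) + 1;
    scan.ranges.assign(n, std::numeric_limits<float>::infinity());

    // Drop each sonar reading into the bin closest to its mounting bearing.
    for (int i = 0; i < NUM_SENSORS; ++i)
    {
      int bin = (int)std::floor((BEARINGS[i] - scan.angle_min) / scan.angle_increment + 0.5);
      if (bin >= 0 && bin < n)
        scan.ranges[bin] = latest[i];
    }
    scan_pub.publish(scan);
    rate.sleep();
  }
  return 0;
}

With a scan like that in hand, amcl or a costmap could consume it like any other laser, though with so few beams I would not expect it to localize as well as a real scanner.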
Thanks in advance.
Naman
↧
Robot navigation using ultrasound and IR sensors
↧
move_base goal not reached
Hi,
I am trying to use move_base to move my robot to a particular point in the environment. While moving autonomously, the robot should build a map at the same time using the `gmapping` node. I am running on a Powerbot platform with the `rosaria` package to control and drive the robot. [This](https://www.dropbox.com/s/r542gvao54zyw9g/move_base.launch?dl=0) is the move_base.launch file that I am using to set the parameters of the `move_base` package. As you can see, I have disabled the recovery_behaviour and clearing_rotation parameters. The local planner that I am using is the DWA planner.
move_base.launch:
Then, I am using [this code](https://www.dropbox.com/s/ep17rcb3couik9y/simple_navigation_goals.cpp?dl=0) to move the robot 30 cm along its x-axis and 20 cm along its y-axis. My robot can only move in a two-dimensional environment. I need to set up move_base so that I can give the robot a goal location as an (x, y) coordinate and have it move to that location.
sending_simple_goal.cpp:
#include <ros/ros.h>
#include <move_base_msgs/MoveBaseAction.h>
#include <actionlib/client/simple_action_client.h>

typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

int main(int argc, char** argv){
  ros::init(argc, argv, "simple_navigation_goals");

  //tell the action client that we want to spin a thread by default
  MoveBaseClient ac("move_base", true);

  //wait for the action server to come up
  while(!ac.waitForServer(ros::Duration(5.0)))
  {
    ROS_INFO("Waiting for the move_base action server to come up");
  }
  ROS_INFO("Connected to move_base action server!");

  move_base_msgs::MoveBaseGoal goal;

  //we'll send a goal to the robot to move 0.3 m along x and 0.2 m along y, relative to base_link
  goal.target_pose.header.frame_id = "base_link";
  goal.target_pose.header.stamp = ros::Time::now();

  double x;
  double y;
  double w;
  double angle = 0;
  x = 0.3;
  y = 0.2;
  w = 1; //cos(angle/2);

  goal.target_pose.pose.position.x = x;
  goal.target_pose.pose.position.y = y;
  goal.target_pose.pose.orientation.w = w;

  ROS_INFO("Sending goal");
  ac.sendGoal(goal);
  ROS_INFO("Goal has been sent. Now waiting for result.");
  ac.waitForResult(ros::Duration(30.0));
  ROS_INFO("Done waiting for result. Getting state now");

  if(ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
    ROS_INFO("Hooray, the base moved %.2f meters along x and %.2f meters along y", x, y);
  else
    ROS_INFO("The base failed to move %.2f meters along x and %.2f meters along y for some reason", x, y);

  return 0;
}
My problem with my current code is that if I give it a goal to move 30 cm along the x-axis only and 0 cm along the y-axis, the goal is reached successfully. But if, as explained, I set the goal to x=0.3 and y=0.2, the goal is never reached and the robot keeps spinning and moving in a random fashion. As a result, the map is a total mess, as you can see from the attached image: ![Result of move_base](/upfiles/14273608961990708.png)
Can anyone please help identify the problem? Also, can anyone help me understand how to use move_base to set an (x, y) coordinate in the map as a goal for the robot to reach?
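For reference, this is roughly what I would expect a map-frame goal to look like, using the same MoveBaseClient `ac` as above (just a sketch; the frame name "map" and the coordinates are assumptions, and it relies on gmapping/amcl providing the map->odom transform):

// Sketch of sending a goal expressed in the map frame instead of base_link.
move_base_msgs::MoveBaseGoal goal;
goal.target_pose.header.frame_id = "map";          // goal in map coordinates
goal.target_pose.header.stamp = ros::Time::now();
goal.target_pose.pose.position.x = 2.0;            // example (x, y) on the map
goal.target_pose.pose.position.y = 1.5;
goal.target_pose.pose.orientation.w = 1.0;         // face along the map x-axis
ac.sendGoal(goal);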
Thanks in advance.
↧
↧
Ultrasound and IR sensors vs Kinect for Robot navigation
Hi all,
I have a mobile robot and I would like it to navigate around the building. I already have a 2D map of the building. I have rotational encoders to get the odometry information and IMU/UWB for localization. I only have ultrasound sensors, IR sensors and a Kinect which I can use for navigation. I want to know which is better for navigation (ultrasound and IR sensors, or the Kinect), given that I am aiming for pretty good accuracy and it should not be very computationally expensive. In my opinion the Kinect will do a better job, but my concern is that it might be computationally very expensive compared to proximity sensors, given that I have to run everything on the NVIDIA Jetson TK1 board (https://developer.nvidia.com/jetson-tk1). Then again, if I go with proximity sensors, I have to use a bunch of them and I don't know how effective and efficient that will be. Also, I am a little worried about the dead band of the Kinect, which is around 50 cm, way more than the dead band of ultrasound sensors (~10-15 cm).
Any guidance regarding this will be appreciated.
Thanks in advance.
Naman
↧
Merge the output of an ultrasound sensor with the Hokuyo laser output for navigation
Hi all,
I have a mobile robot and I would like it to navigate around the building. I already have a 2D map of the building. I have rotational encoders to get the odometry information and IMU/UWB for localization. I will be using a Hokuyo laser sensor and ultrasound sensors for navigation. The Hokuyo will be the main sensor for navigation and the ultrasound sensors will be used to detect obstacles the Hokuyo cannot see (like overhangs, etc.). So, my question is: how can I merge the output of an ultrasound sensor, which is of type sensor_msgs/Range, with the Hokuyo output, which is of type sensor_msgs/LaserScan?
I think one way could be to convert the Range message to a LaserScan message and then modify the original LaserScan message from the Hokuyo to incorporate it, ending up with a final merged LaserScan message to use for navigation. But I don't know how hard that will be, especially the merging of two LaserScan messages into one.
Is there any other (easier or better) way to accomplish this? Or, if this is the only way, what is the best way to go about it?
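To make the first idea concrete, here is a minimal sketch of what I have in mind. The sonar bearing, field of view, topic names, and the simplifying assumption that the sonar reading can be expressed directly in the laser frame are all made up for illustration:

#include <ros/ros.h>
#include <sensor_msgs/Range.h>
#include <sensor_msgs/LaserScan.h>
#include <cmath>
#include <limits>

double sonar_range = std::numeric_limits<double>::infinity();
const double SONAR_BEARING = 0.0;   // sonar pointing straight ahead (assumed)
ros::Publisher merged_pub;

void sonarCb(const sensor_msgs::Range::ConstPtr& msg)
{
  sonar_range = msg->range;   // keep the latest sonar reading
}

void scanCb(const sensor_msgs::LaserScan::ConstPtr& msg)
{
  sensor_msgs::LaserScan merged = *msg;   // start from the Hokuyo scan

  // Overwrite every beam that falls inside the sonar cone if the sonar sees
  // something closer than the laser does (e.g. an overhang the laser misses).
  double half_fov = 0.35 / 2.0;           // sonar field of view in radians (assumed)
  for (size_t i = 0; i < merged.ranges.size(); ++i)
  {
    double angle = merged.angle_min + i * merged.angle_increment;
    if (std::fabs(angle - SONAR_BEARING) <= half_fov && sonar_range < merged.ranges[i])
      merged.ranges[i] = sonar_range;
  }
  merged_pub.publish(merged);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scan_sonar_merger");
  ros::NodeHandle nh;
  merged_pub = nh.advertise<sensor_msgs::LaserScan>("scan_merged", 1);
  ros::Subscriber s1 = nh.subscribe("/sonar", 1, sonarCb);
  ros::Subscriber s2 = nh.subscribe("/scan", 1, scanCb);
  ros::spin();
  return 0;
}

A proper version would transform the sonar cone into the laser frame via tf before overwriting beams; alternatively, the range_sensor_layer costmap plugin avoids the merging step entirely by feeding the Range messages straight into the costmap.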
Thanks in advance.
Naman Kumar
↧
Navigation Stack Computation from External Computer
Hello everyone!
I am working on a ROS Indigo project where we will be reading all of our sensor information on an Ubuntu Trusty install on a BeagleBone Black Rev C and sending that information over the appropriate topics (odometry, laser scan topics, etc.) to an external computer over the network (also Ubuntu Trusty running ROS Indigo), which will then send back the appropriate twist messages to the base controller node that lives in the workspace on the BeagleBone Black. So the only nodes that should be running on the BeagleBone are the IMU node, the SICK LMS wrapper node, and the tf configuration nodes (we still haven't figured out how to set those up exactly, so insight on that is also appreciated).
I had been following the tutorial here: http://wiki.ros.org/navigation/Tutorials/RobotSetup
My only concern is that the only way I could get the my_robot_name_2dnav package to compile correctly with the apparently required 'move_base' dependency was to install 'ros-indigo-navigation' on the BeagleBone. My worry is that we don't want the BeagleBone to be doing any of this computation, as it is already at almost 80% CPU usage just reading and sending the LaserScan messages from the LIDAR.
Am I doing this correctly? Or am I missing something to connect the navigation stack to the external computer that will be able to handle all of the processing necessary?
Thanks for all input
↧
↧
Adding range_sensor_layer to layered costmap for global planning
Hi all,
I am working on mobile robot navigation with the Hokuyo and Ultrasonic sensors. Hokuyo will be the main sensor for navigation and Ultrasound sensors will be used to detect obstacles not seen by the hokuyo (like Overhangs, etc). I already have the map of the environment. I have Rotational encoders to get the odometry information and IMU/UWB for localization.
I have added a range_sensor_layer (http://wiki.ros.org/range_sensor_layer) for the ultrasound sensors to the costmaps (local and global). I am using the move_base (http://wiki.ros.org/move_base) package for navigation. The problem is that the global planner still produces a path that passes through the obstacles detected by the ultrasound sensors, and only the local planner takes care of avoiding those obstacles. The global planner accounts for the objects detected by the Hokuyo and plans the path accordingly, but it ignores the obstacles detected by the ultrasound sensors. So, how can I correctly include the range_sensor_layer in the global costmap so that the global plan does not pass through the obstacles detected by the ultrasound sensors?
The relevant parameters are shown:
local_costmap_params.yaml
plugins:
- {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
- {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
#Configuration for the sensors that the costmap will use to update a map
obstacle_layer:
  observation_sources: base_scan
  base_scan: {data_type: LaserScan, sensor_frame: /base_laser_link, topic: /base_scan, expected_update_rate: 0.4,
              observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 0.4, min_obstacle_height: 0.08}
sonar_layer:
  topics: ["/sonar"]
  no_readings_timeout: 1.0
global_costmap_params.yaml
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
- {name: obstacle_layer, type: "costmap_2d::VoxelLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
- {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"}
#Configuration for the sensors that the costmap will use to update a map
obstacle_layer:
  observation_sources: base_scan
  base_scan: {data_type: LaserScan, sensor_frame: /base_laser_link, topic: /base_scan, expected_update_rate: 0.4,
              observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 0.4, min_obstacle_height: 0.08}
sonar_layer:
  topics: ["/sonar"]
  no_readings_timeout: 1.0
Update 1:
Gazebo simulation

Rviz Visualization

As you can see, the costmap visualization of the sonar obstacles in RViz does not look correct. The entire region in front of the sonar should be marked as an obstacle, which is not happening.
Thanks in advance.
Naman Kumar
↧
control loop missed its desired rate, but with low CPU load
Hi, I keep encountering the following warning message while my robot navigates itself toward the goal pose (with the DWA local planner):
Control loop missed its desired rate of 10.0000Hz... the loop actually took 0.6461 seconds
Solutions suggested in earlier posts include:
1. reduce the resolution of costmaps (0.1m)
2. reduce the width and height of the local costmap (3x3m)
3. reduce vx_samples and vtheta_samples (4 and 10 respectively)
4. reduce controller_frequency (5Hz)
However, none of these adjustments helped to increase the rate of the control loop to even 5 Hz; the rate is 2 Hz on average. At the same time, the CPU load for move_base is low (10-15%), while rviz takes about 30-40% of the CPU. Because the CPU load for the whole computer is below 50% during navigation, I wonder why the control loop takes so long. Your help is highly appreciated.
Below are my parameters
costmap_common_params.yaml
obstacle_range: 5.5
raytrace_range: 5.5
footprint: [[-0.15,-0.3], [0.6,-0.3], [0.7,0], [0.6,0.3], [-0.15,0.3]]
footprint_padding: 0.1
#robot_radius: ir_of_robot
inflation_radius: 0.60
resolution: 0.1
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: rplidar, data_type: LaserScan, topic: scan, marking: true, clearing: true}
local_costmap_params.yaml
local_costmap:
  global_frame: odom
  robot_base_frame: base_link
  rolling_window: true
  update_frequency: 2.0
  publish_frequency: 2.0
  static_map: false
  width: 3.0
  height: 3.0
global_costmap_params.yaml
global_costmap:
  global_frame: map
  robot_base_frame: base_link
  update_frequency: 1.0
  static_map: true
dwa_local_planner_params.yaml
controller_frequency: 5
recovery_behaviour_enabled: true
DWAPlannerROS:
  # robot configuration parameters
  acc_lim_x: 1.5
  acc_lim_th: 1.5
  max_trans_vel: 0.5
  min_trans_vel: 0.25
  max_vel_x: 0.5
  min_vel_x: 0.25
  max_rot_vel: 1
  min_rot_vel: 0.7
  # (care nothing about y)
  max_vel_y: 0
  min_vel_y: 0
  acc_lim_y: 0
  # goal tolerance
  yaw_goal_tolerance: 0.2
  xy_goal_tolerance: 0.2
  # forward simulation
  sim_time: 2.0
  vx_samples: 4
  vy_samples: 0
  vtheta_samples: 10
  sim_granularity: 0.1
  angular_sim_granularity: 0.1
  penalize_negative_x: false
  # weights
  occdist_scale: 10
↧
Costmap2D (mx, my) and partial mapping
Hi,
I have two questions regarding Costmap2D (Turtlebot + Hokuyo lidar for generating and maintaining the map):
1. Suppose I want to run my robot in a workplace to search for a flat sign/object on the floor with a downward-looking scanner/sensor. To make the process more manageable, I thought it would be reasonable to virtually set a confinement boundary (e.g. 5x5 m) so the robot searches each block and then, if unsuccessful, moves on to another 5x5 m block. Is it possible to **autonomously** add and remove a "virtual confinement" layer, as I described, to the map?
2. To set goals for move_base in order to search for the interesting object, how can I extract (mx, my) grid cell coordinates from a dynamic map (to feed them as nav goals)? A sketch of what I mean is below.
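This is the kind of conversion I am imagining. It uses the Costmap2D mapToWorld/worldToMap API and assumes I somehow have access to a costmap_2d::Costmap2DROS instance (e.g. from inside a planner plugin or my own node):

#include <costmap_2d/costmap_2d_ros.h>
#include <costmap_2d/costmap_2d.h>
#include <costmap_2d/cost_values.h>

// Convert between grid cells and world coordinates on a (possibly dynamic) costmap.
void cellGoalExample(costmap_2d::Costmap2DROS* costmap_ros)
{
  costmap_2d::Costmap2D* costmap = costmap_ros->getCostmap();

  unsigned int mx = 100, my = 120;        // example grid cell
  double wx, wy;
  costmap->mapToWorld(mx, my, wx, wy);    // cell -> world coordinates (metres, global frame)
  // wx, wy could now be used as target_pose.pose.position in a move_base goal.

  // The other direction: which cell does a world point fall into, and is it free?
  unsigned int cx, cy;
  if (costmap->worldToMap(wx, wy, cx, cy))
  {
    unsigned char cost = costmap->getCost(cx, cy);
    bool is_free = (cost == costmap_2d::FREE_SPACE);   // 0 means free space
    (void)is_free;
  }
}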
I hope I was clear; I'd be happy to give more info if necessary. Thank you.
↧
autonomous navigation
I am trying to run the teleop node and autonomous navigation at the same time. But when I use a 2D Nav Goal during autonomous navigation, the turtlebot doesn't move. What do you think the problem is?
↧
↧
How is the scan angle measured and published in the LaserScan msg?
The ranges to obstacles are measured from the output of the laser sensor, but how are the scan angles actually measured? Do we use a separate sensor to get those values?
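For reference, my understanding (from the consumer's point of view) is that the angles are not reported by a separate sensor; each beam's angle is implied by its index and the angle_min/angle_increment fields of the message, roughly like this:

#include <sensor_msgs/LaserScan.h>

// Recover the (angle, range) of each beam from a LaserScan: the driver publishes
// angle_min, angle_max and angle_increment, and the angle of beam i follows from i.
void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  for (size_t i = 0; i < scan->ranges.size(); ++i)
  {
    double angle = scan->angle_min + i * scan->angle_increment;  // radians, scanner frame
    double range = scan->ranges[i];                              // metres
    (void)angle; (void)range;   // (angle, range) is one polar point of the scan
  }
}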
↧
robot_pose_ekf
hello,
I'm using a turtlebot on the hydro distro. I got the navigation stack working with odometry; now I want to work with odom_combined (robot_pose_ekf). Has anyone done this who can help me? I don't know what changes I need to make.
Do I just have to change every odometry term to odom_combined, and remove the transform between odom and base_footprint, since the robot_pose_ekf node will publish the transform between odom_combined and base_footprint?
Any help please!
Thanks in advance
↧
Calculating Odometry Data for the Navigation Stack
Hi Everyone,
I hope someone can help me answer a simple question. My custom robot delivers data: the velocity (m/s) and the rotational velocity (deg/s or rad/s). I also receive the distance the robot has traveled (mm). My problem is that I need to publish the odometry data for the navigation stack. I followed this tutorial (http://wiki.ros.org/navigation/Tutorials/RobotSetup/Odom). The tutorial calls for vx, vy and vth. I have vth, but I need vx and vy and don't know how to calculate them having only the velocity and the rotational velocity.
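A minimal sketch of what I think the integration would look like, assuming the base is differential-drive (non-holonomic), so it can only translate along its own x-axis:

#include <cmath>

// Pose in the odom frame, integrated from the velocities reported by the base.
double x = 0.0, y = 0.0, th = 0.0;

// v     : forward speed reported by the base (m/s)
// omega : yaw rate (rad/s) - convert from deg/s first if that is what the base sends
// dt    : time since the last update (s)
void integrateOdometry(double v, double omega, double dt)
{
  double vx  = v;      // body-frame forward velocity
  double vy  = 0.0;    // a differential-drive base cannot translate sideways
  double vth = omega;

  // Project the body-frame velocity into the odom frame and integrate,
  // exactly as in the RobotSetup/Odom tutorial.
  x  += (vx * std::cos(th) - vy * std::sin(th)) * dt;
  y  += (vx * std::sin(th) + vy * std::cos(th)) * dt;
  th += vth * dt;
}

If that assumption holds, the vx/vy the tutorial asks for are just these body-frame values (so vy stays 0 for my robot), and the odometry message carries both the integrated pose and those velocities.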
Thank you in advance for any advice!
↧
Problem with Transform-Publisher on custom Bot
Hi Everyone,
I hope someone can answer a question for me. I have a custom robot and I would like to use the ROS navigation stack.
I am following this tutorial: http://wiki.ros.org/navigation
The problem is that something is wrong with my transform publisher. The tf tree is odom-->map-->base_link-->base_laser.
Here is my Transform-Publisher:
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv){
  ROS_INFO("starte tf_broadcaster!");
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;
  ros::Rate r(100);
  tf::TransformBroadcaster broadcaster;
  while(n.ok()){
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
        ros::Time::now(), "base_link", "base_laser"));
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 0), tf::Vector3(0.0, 0.0, 0.0)),
        ros::Time::now(), "odom", "base_link"));
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 0), tf::Vector3(0.0, 0.0, 0.0)),
        ros::Time::now(), "map", "odom"));
    r.sleep();
  }
  return 0;
}
If I start the ROS navigation stack, the last message is "odom received". All is OK!
So when I click on "2D Pose Estimate", the result is this:
https://www.dropbox.com/s/n211hzuvjrr4rdj/Screenshot%202015-04-29%2016.21.35.png?dl=0
The odom frame and the base_laser frame are in two different places. Why?
If I then click the "2D Nav Goal" button, the robot rotates in a circle and I get an error:
"Rotate recovery can't rotate in place because there is a potential collision. cost -1.00"
Here is the log:
https://www.dropbox.com/s/7d4fv3ad2h277at/%20log.png?dl=0
I think the problem is the transform publisher. Can anybody help me?
Thanks
Max
↧
↧
Navigation stack goal off global costmap
Hi all
I'm using the navigation stack on my robot, and every time I send a goal using rviz (2D Nav Goal) it gives me the following:
> [ WARN] [1431013577.749213787]: The goal sent to the navfn planner is off the global costmap. Planning will always fail to this goal.
> [ WARN] [1431013578.746803719]: The goal sent to the navfn planner is off the global costmap. Planning will always fail to this goal.
> [ WARN] [1431013579.747597482]: The goal sent to the navfn planner is off the global costmap. Planning will always fail to this goal.
> [ WARN] [1431013580.747194408]: The goal sent to the navfn planner is off the global costmap. Planning will always fail to this goal.
> [ WARN] [1431013581.747623215]: The goal sent to the navfn planner is off the global costmap. Planning will always fail to this goal.
> [ WARN] [1431013582.747012998]: The goal sent to the navfn planner is off the global costmap. Planning will always fail to this goal.
> [ERROR] [1431013583.745273259]: Aborting because a valid plan could not be found. Even after executing all recovery behaviors
my global costmap is the following:
global_costmap:
  global_frame: /rosphere/odom_combined
  robot_base_frame: /rosphere/base_footprint
  transform_tolerance: 5.0
  update_frequency: 0.5
  publish_frequency: 0.0
  static_map: false
  rolling_window: true
  width: 100.0
  height: 100.0
  resolution: 0.2
and the same for the local costmap, only changing the width, height and resolution (20, 20 and 0.05 respectively).
The goal I'm sending is just one meter from the base along the x-axis, so the action should simply be to move straight forward.
In rviz the grid is set to a plane cell count of 20 with 1 meter per cell.
I've changed the resolution, width and height of both costmaps and it's always the same. Any ideas? If you need more info, just ask.
Salvador
↧
meter_scoring: move_base navigation
Hi all,
I'm using move_base for 2D navigation and it's working. The only thing is that I keep getting a warning about meter_scoring not being set. The warning is the following:
Trajectory Rollout planner initialized with param meter_scoring not set. Set it to true to make your settings robust against changes of costmap resolution
And in my base_local_planner_params I set it to true. I'm getting good and bad results from time to time; could this be the problem? What can I do to stop getting this warning?
Thank you for your time,
Salvador
↧
no range readings received warning while using range_sensor_layer
Hi all,
I am using Ultrasound sensors and Hokuyo laser for navigation. I am using [range_sensor_layer](http://wiki.ros.org/range_sensor_layer) for ultrasound sensors. The problem is when I publish the `sensor_msgs/Range` message and use the range_sensor_layer, I get the following warning and therefore `range_sensor_layer` is not able to modify the costmap:
[ WARN] [1431715266.304546076]: No range readings received for 389.80 seconds, while expected at least every 1.00 seconds.
I am not able to see why I am getting this warning; everything seems to be set up and working properly (described below).
`sensor_msgs/Range` message is published by the arduino on topic `/US1` with `frame_id: /us1` and the `move_base_node` is subscribing to it:
>> rostopic info /US1
Type: sensor_msgs/Range
Publishers:
* /serial_node (http://namankumar-Inspiron-5521:33329/)
Subscribers:
* /move_base_node (http://namankumar-Inspiron-5521:53748/)
* /rviz_1431714908738859698 (http://namankumar-Inspiron-5521:59073/)
Also, when I do `rostopic echo /US1`, I get the correct range values:
header:
  seq: 6186
  stamp:
    secs: 1431715507
    nsecs: 646532096
  frame_id: /us1
radiation_type: 0
field_of_view: 0.34999999404
min_range: 0.019999999553
max_range: 3.0
range: 184.0
The **tf tree** is properly set up:

**global_costmap_params.yaml**
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
- {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
- {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
#Configuration for the sensors that the costmap will use to update a map
obstacle_layer:
  observation_sources: laser_scan_sensor
  laser_scan_sensor: {data_type: LaserScan, sensor_frame: /laser, topic: /scan, expected_update_rate: 0.4,
                      observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 2.0, min_obstacle_height: 0.0, inf_is_valid: true}
sonar_layer:
  topics: ['/US1']
  no_readings_timeout: 1.0
  clear_threshold: 0.2
  mark_threshold: 0.8
  clear_on_max_reading: true
**local_costmap_params.yaml**
plugins:
- {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
- {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
#Configuration for the sensors that the costmap will use to update a map
obstacle_layer:
  observation_sources: laser_scan_sensor
  laser_scan_sensor: {data_type: LaserScan, sensor_frame: /laser, topic: /scan, expected_update_rate: 0.4,
                      observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 2.0, min_obstacle_height: 0.0, inf_is_valid: true}
sonar_layer:
  topics: ['/US1']
  no_readings_timeout: 1.0
  clear_threshold: 0.2
  mark_threshold: 0.8
  clear_on_max_reading: true
Does anyone have any idea what is going wrong here? Any help will be appreciated.
Thanks a lot.
Naman Kumar
↧
range_sensor_layer can't transform from map to /us1
Hi all,
I am using Ultrasound sensors and Hokuyo laser for navigation. I am using [range_sensor_layer](http://wiki.ros.org/range_sensor_layer) for ultrasound sensors. The problem is when I publish the `sensor_msgs/Range` message and use the range_sensor_layer, I get the following error:
[ERROR] [1431956010.233195664]: Range sensor layer can't transform from map to /us1 at 1431956010.107522
I am using `static_transform_publisher` for the transformation between `/base_link` and `/us1`.
I am not able to see why I am getting this error, as the **tf tree** is properly set up and there is a transformation from map to /us1:

**global_costmap_params.yaml**
plugins:
- {name: static_layer, type: "costmap_2d::StaticLayer"}
- {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
- {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
#Configuration for the sensors that the costmap will use to update a map
obstacle_layer:
  observation_sources: laser_scan_sensor
  laser_scan_sensor: {data_type: LaserScan, sensor_frame: /laser, topic: /scan, expected_update_rate: 0.4,
                      observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 2.0, min_obstacle_height: 0.0, inf_is_valid: true}
sonar_layer:
  topics: ['/US1']
  no_readings_timeout: 1.0
  clear_threshold: 0.2
  mark_threshold: 0.8
  clear_on_max_reading: true
**local_costmap_params.yaml**
plugins:
- {name: obstacle_layer, type: "costmap_2d::ObstacleLayer"}
- {name: sonar_layer, type: "range_sensor_layer::RangeSensorLayer"}
- {name: inflation_layer, type: "costmap_2d::InflationLayer"}
#Configuration for the sensors that the costmap will use to update a map
obstacle_layer:
  observation_sources: laser_scan_sensor
  laser_scan_sensor: {data_type: LaserScan, sensor_frame: /laser, topic: /scan, expected_update_rate: 0.4,
                      observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 2.0, min_obstacle_height: 0.0, inf_is_valid: true}
sonar_layer:
  topics: ['/US1']
  no_readings_timeout: 1.0
  clear_threshold: 0.2
  mark_threshold: 0.8
  clear_on_max_reading: true
Does anyone have any idea what is going wrong here? Please let me know if you need more information from me. Any help will be appreciated.
Thanks a lot.
Naman Kumar
↧
↧
Goal reached, incorrect map
Hello,
I am using `move_base` to make my robot move a certain distance (x,y) **relative to itself**. I am using the following code in order to send my goal to `move_base`:
goal.target_pose.header.frame_id = "/base_link";
goal.target_pose.header.stamp = ros::Time::now();
double theta; // target robot orientation
theta = 0;
goal.target_pose.pose.position.x = 1.0; //Send robot to goal +1m along x, and 0.3m along y, z remains the same
goal.target_pose.pose.position.y = 0.3;
goal.target_pose.pose.position.z = 0;
goal.target_pose.pose.orientation.x = 0.000;
goal.target_pose.pose.orientation.y = 0.000;
goal.target_pose.pose.orientation.z = sin(theta/2);
goal.target_pose.pose.orientation.w = cos(theta/2);
ROS_INFO("Sending goal");
ac.sendGoal(goal);
The issue is that in the physical world, the goal is being reached successfully, as I'm expecting it to be. So, I can see that the robot has moved by 1m forward (x-axis) and 0.3m towards its left (y-axis). The trajectory following is also perfect. My only problem is that the map being built at the same time (by `gmapping`) is totally incorrect and the orientation of the robot in this map is also a mess:

Now, according to my code, the robot should have moved by 1m forward, 0.3m towards the left and then the final orientation would be at 0 degrees to itself, i.e. it should remain facing forwards. Physically, I can see the robot reaching this goal appropriately and perfectly. But what I see in RVIZ is a completely different story. Here are the parameter files that I am using:
common_costmap_params.yaml:
map_type: costmap
transform_tolerance: 0.7
obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[-0.5, -0.33], [-0.5, 0.33], [0.5, 0.33], [0.5, -0.33]]
inflation_radius: 0.55
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: laser,
                    data_type: LaserScan,
                    topic: /scan,
                    marking: true,
                    clearing: true,
                    inf_is_valid: true}
global_costmap_params.yaml:
global_costmap:
  global_frame: /odom
  robot_base_frame: base_link
  update_frequency: 3.0
  publish_frequency: 10.0
  static_map: true
  #width: 30.0
  #height: 30.0
  #resolution: 0.025
  #origin_x: 0.0
  #origin_y: 0.0
  plugins:
    - {name: static, type: "costmap_2d::StaticLayer"}
    - {name: inflation, type: "costmap_2d::InflationLayer"}
local_costmap_params.yaml:
local_costmap:
  global_frame: /odom
  robot_base_frame: base_link
  update_frequency: 3.0
  publish_frequency: 10.0
  static_map: false
  rolling_window: true
  width: 10.0
  height: 10.0
  resolution: 0.05
  origin_x: 0.0
  origin_y: 0.0
  plugins:
    - {name: obstacles_laser, type: "costmap_2d::ObstacleLayer"}
    - {name: inflation, type: "costmap_2d::InflationLayer"}
base_local_planner_params.yaml:
TrajectoryPlannerROS:
  # Robot Configuration Parameters
  acc_lim_x: 0.5
  acc_lim_y: 0.5
  acc_lim_theta: 2.0
  max_vel_x: 0.2
  min_vel_x: 0.02
  max_rotational_vel: 1.0
  min_in_place_rotational_vel: 0.4
  escape_vel: -0.1
  holonomic_robot: false
  meter_scoring: true
  # Goal Tolerance Parameters
  xy_goal_tolerance: 0.1
  yaw_goal_tolerance: 0.2
  # new addition
  latch_xy_goal_tolerance: false
  # Forward Simulation Parameters
  sim_time: 1.0
  sim_granularity: 0.025
  vx_samples: 3
  vtheta_samples: 20
Here's the launch file for move_base:
I would greatly appreciate it if anyone could help me figure out what's going on. I need my robot to be able **to navigate to some distance (x, y) relative to itself in a map that is being built concurrently with the robot's navigation**.
Thanks.
RND.
↧
detecting and avoiding glass obstacles
Hi all,
I want to know the best way to detect glass obstacles while the robot is navigating in a room with a given map. I know laser and IR sensors won't work; the only option left is sonar sensors. I heard somewhere that the Kinect V2 can actually detect glass walls, which seems really surprising to me; is it really true?
I know this is a very general question and, technically, it has nothing to do with ROS, but eventually I will be doing everything in ROS, so I thought this would be a good place for such a question.
Any help or opinion regarding it will be appreciated.
Thanks in advance.
Naman Kumar
↧
doubt regarding working of base_local_planner and oscillations about the global plan
Hi all,
I have a doubt regarding the workings of the [base_local_planner](http://wiki.ros.org/base_local_planner), specifically the Trajectory Rollout planner. According to the above link, the local planner samples in the robot's control space, performs forward simulation, evaluates the different trajectories and finally picks the highest-scoring trajectory. My question is: **how does the local planner make sure that the robot follows that exact trajectory?** Does it use a PID controller or something like that? The problem I am having is that the robot does not follow the global plan accurately but oscillates about it. I have tried tuning different parameters and followed [http://answers.ros.org/question/73195/navigation-oscilation-on-local-planner/](http://answers.ros.org/question/73195/navigation-oscilation-on-local-planner/), which reduced the oscillations, but it's still not very good: the robot still oscillates about the global plan. So, I was wondering, if it uses something like PID, how can I tune those parameters?
**Update 1**: One more thing: how does the base_local_planner use odom (nav_msgs/Odometry) to give a better estimate for /cmd_vel? There has to be some kind of controller (maybe only P control). Sorry if I am being really naive, but it would be great if someone could clear up my doubt or point me in the right direction.
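For reference, this is my rough mental model of one control cycle, written out as a sketch (the helper functions and weight values are made up, not the real base_local_planner code). My current understanding is that there is no separate PID tracking the chosen trajectory: the whole sample/simulate/score loop is re-run every cycle from the latest odometry, and only the first command of the winning trajectory is sent.

#include <limits>
#include <vector>

// Made-up weights standing in for pdist_scale / gdist_scale / occdist_scale.
const double PDIST = 0.6, GDIST = 0.8, OCCDIST = 0.01;

// Hypothetical stand-ins for the real scoring against the global plan and costmap.
double pathDistance(double v, double w)  { return 0.0; }  // distance of simulated path to the global plan
double goalDistance(double v, double w)  { return 0.0; }  // distance of the endpoint to the local goal
double obstacleCost(double v, double w)  { return 0.0; }  // max costmap cost along the path, < 0 if it collides

struct Cmd { double v; double w; };

// One control cycle: pick the admissible (v, w) with the lowest cost.
Cmd pickBestCommand(const std::vector<double>& v_samples,   // vx_samples velocities reachable under acc_lim_x
                    const std::vector<double>& w_samples)   // vtheta_samples velocities under acc_lim_theta
{
  Cmd best = {0.0, 0.0};
  double best_cost = std::numeric_limits<double>::infinity();
  for (size_t i = 0; i < v_samples.size(); ++i)
  {
    for (size_t j = 0; j < w_samples.size(); ++j)
    {
      double v = v_samples[i], w = w_samples[j];
      double occ = obstacleCost(v, w);          // forward simulation over sim_time
      if (occ < 0.0)
        continue;                               // trajectory collides, discard it
      double cost = PDIST * pathDistance(v, w)  // stay close to the global plan
                  + GDIST * goalDistance(v, w)  // make progress toward the local goal
                  + OCCDIST * occ;              // stay away from obstacles
      if (cost < best_cost) { best_cost = cost; best.v = v; best.w = w; }
    }
  }
  return best;  // published once as cmd_vel; next cycle the loop re-runs from the new odom pose
}

If that model is right, the odometry enters only as the starting state (current pose and velocity) of the next rollout, and reducing the oscillation would be a matter of the scoring weights and sampling limits rather than PID gains, but I would appreciate confirmation.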
Any help will be appreciated.
Thanks a lot.
Naman Kumar
↧