From: Lydia K. <ka...@ri...> - 2025-04-08 23:55:40
|
Hi all,

We are happy to announce version 1.7.0 of the Open Motion Planning Library (OMPL) <http://ompl.kavrakilab.org/>. This release introduces several significant enhancements and changes aimed at improving functionality, compatibility, and user experience. Below is a summary of the main updates.

New Planners and State Spaces:
- Effort Informed Trees (EIT*): A new planner designed to efficiently handle kinodynamic planning problems by leveraging an informed search strategy.
- Dubins State Spaces: Three new 3D extensions to the Dubins model have been added: the Vana, Owen, and Vana-Owen state spaces.
- RRT-Rope: The rope short-cutting technique from RRT-Rope has been added to our PathSimplifier class.

Removed Features:
- ODE and MORSE Extensions: Support for the ODE (Open Dynamics Engine) and MORSE robot simulator has been removed.

Performance Improvements:
- Dubins State Space Optimization: Implemented Dubins set classification for faster distance computations in the Dubins state space.

System Requirements:
- C++17 Compliance: A C++17-compliant compiler is now required to build OMPL.
- CMake and Boost Versions: The minimum required version of CMake is now 3.5, and the minimum supported version of Boost is 1.68.
- NumPy Version: The minimum required version of NumPy for the Python bindings is now 2.0.

Build and Deployment:
- Docker Support: Build targets have been added for creating Docker images on Ubuntu 24.04 for OMPL, the PlannerArena web server, and the OMPL web app. These Docker images are available on Docker Hub <https://hub.docker.com/u/kavrakilab>.
- Python Wheels: Python wheels <https://github.com/ompl/ompl/releases> for OMPL are now available, simplifying the installation process for Python users.

Contributions to this release were made by (in alphabetical order):
Kyle Cesare, independent
Gaël Écorchard, Czech Technical University in Prague
Ryan Friedman, Aerovironment, Inc.
Jonathan Gammell, Queen’s University
Michael Görner, University of Hamburg
Valentin Hartmann, University of Stuttgart
Wolfgang Hönig, Technical University Berlin
Taylan İşleyici, Middle East Technical University
Zak Kingston, Purdue University
Jaeyoung Lim, ETHZ
Rhys Mainwaring, independent
Mark Moll, Metron, Inc.
Johnny Nunez, University of Barcelona
Andreas Orthey, Realtime Robotics
Louis Petit, University of Sherbrooke
Yuri Rocha, MakinaRocks
Marlin Strub, Gravis Robotics
Wil Thomason, RAI Institute
Jafar Uruç, Humanoid

Download
You can download OMPL and find more information about OMPL on its homepage at http://ompl.kavrakilab.org <http://ompl.kavrakilab.org/>.

What is next?
We have already started work on OMPL 2.0. Some features that will be available in our repository <https://github.com/ompl/ompl> soon include:
- Example integrations that show how OMPL can be used in combination with MuJoCo, Bullet, Pinocchio, and other robotics frameworks.
- New Python bindings based on nanobind <https://github.com/wjakob/nanobind>.
- A revamped web page to showcase the broad range of projects where OMPL has been used.

Our next big goal is to integrate the ideas of VAMP <https://www.kavrakilab.org/publications/thomason2024vamp.pdf>’s SIMD-accelerated planning with OMPL, which should result in orders-of-magnitude speed-ups. Of course, we welcome contributions from others, too!

With best wishes,
Lydia Kavraki

Lydia E. Kavraki, Ph.D.
Kenneth and Audrey Kennedy Professor of Computing Professor of Computer Science Professor of Bioengineering Professor of Computer and Electrical Engineering Professor of Mechanical Engineering Director, Ken Kennedy Institute Rice University ka...@ri... <mailto:ka...@ri...> (713) 348-5737 https://profiles.rice.edu/faculty/lydia-e-kavraki http://www.kavrakilab.org <http://www.kavrakilab.org/> http://kenkennedy.rice.edu <http://kenkennedy.rice.edu/> |
From: Florian F. <de...@fl...> - 2025-03-21 15:21:53
|
Hi everyone,

I'm trying to solve a path planning problem where there are obstacles and the path must be a Dubins path. I came across the OMPL library, which looks great, but I struggle to get reliable results. I suspect that I'm not using the library well. I come from the optimization community: I'm familiar with A* and its variants, but I'm not familiar with sampling-based algorithms. For now, what I've done looks like:

ob::StateSpacePtr space(std::make_shared<ob::DubinsStateSpace>(turning_radius));
space->as<ob::SE2StateSpace>()->setBounds(bounds);
og::SimpleSetup ss(space);
const ob::SpaceInformation *si = ss.getSpaceInformation().get();
ss.setStateValidityChecker([obstacles, si](const ob::State *state) {
    return is_state_valid(obstacles, si, state);
});
auto planner(std::make_shared<og::RRTstar>(ss.getSpaceInformation()));
ss.setPlanner(planner);
ss.setStartAndGoalStates(ompl_start, ompl_goal);
ob::PlannerStatus solved = ss.solve(60);

With this, I often get valid paths, but they look far from optimal. Is this the correct way to achieve what I'm looking for? Should I, for example, use a control-based planner? I struggle to understand the difference between control-based and geometric planners.

Thank you for your help,
Best,
Florian |
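One plausible starting point (a sketch under assumptions, not a confirmed diagnosis of the setup above): RRT* optimizes whatever objective is stored in the problem definition; with none set it defaults to path length and keeps refining until the time budget runs out, so longer solve() calls generally yield shorter Dubins paths. Making the objective explicit also allows a cost threshold at which planning may stop early. The threshold value below is made up for illustration.

#include <memory>
#include <ompl/base/objectives/PathLengthOptimizationObjective.h>
#include <ompl/geometric/SimpleSetup.h>

namespace ob = ompl::base;
namespace og = ompl::geometric;

// Sketch: give RRT* an explicit path-length objective plus a stopping
// threshold; call this before ss.solve().
void configureObjective(og::SimpleSetup &ss)
{
    auto objective = std::make_shared<ob::PathLengthOptimizationObjective>(
        ss.getSpaceInformation());
    objective->setCostThreshold(ob::Cost(25.0)); // illustrative bound
    ss.setOptimizationObjective(objective);
}

On the control-based versus geometric question: a geometric planner already respects the Dubins constraint here, because the state space's interpolation follows Dubins curves; control-based planners are mainly needed when the dynamics cannot be captured by such a steering function.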
From: Yu W. N. <ny...@mt...> - 2024-12-03 15:47:37
|
I hope you are all well. My name is Yu Wati Nyi, and I am an undergraduate student majoring in Computer Science and Economics at Mount Holyoke College. I am a senior and will be graduating in May 2025. I am currently working as an undergraduate researcher for a new robotics lab at my college and we will be researching guided motion planning with annotated skeletons. I am currently setting up the technology for future research with OMPL. I was wondering if anyone here would be willing to mentor me on developing a topological skeleton-based multi-robot path planning system. Also, please do reach out if this sounds like an exciting project to you. I would love to expand my network and learn from people in this field! Best, Yu Wati Nyi Computer Science and Economics Major Mount Holyoke College '25 |
From: Lydia K. <ka...@ri...> - 2024-11-25 19:18:07
|
Rice University
Software Engineer in Robotics

The Kavraki group (http://www.kavrakilab.org) seeks candidates to fill one position for a software engineer. The group develops novel computational methodologies for enabling robots to work with and support people. We work on algorithms for motion planning for high-dimensional systems with kinematic and dynamic constraints, integrated frameworks for reasoning under sensing and control uncertainty, novel methods for learning and using experiences, and ways to instruct robots at a high level and collaborate with them. Our research is inspired by various applications, from robots that will assist people in their homes to robots that will build space habitats.

The successful candidate will contribute to our efforts to build and maintain open-source software for motion planning in robotics. In the past, the group and its collaborators developed the Open Motion Planning Library (OMPL), https://ompl.kavrakilab.org, which enjoys widespread popularity. The library will be extended and modernized; it will also be expanded for applications outside robotics (e.g., in computational structural biology). Other responsibilities include maintaining open-source software and web servers deployed by the lab. The candidate will be welcome to participate in the group's research efforts, depending on their interests.

BACKGROUND: The successful candidate must have an MA or MSc or PhD in Computer Science, Electrical Engineering, Mechanical Engineering, or a related field. A degree with an emphasis on robotics is desirable. Solid software engineering skills are needed for this position. Extensive experience developing in C++ is required, while experience with Python or Rust is desirable. Prior experience in creating and maintaining open-source software is a plus. The position is flexible depending on the qualifications of the applicants (in particular, candidates with a PhD can be considered for a combined postdoctoral position). The position is for one year and is renewable upon satisfactory progress.

HOW TO APPLY: Interested applicants should contact Professor Lydia Kavraki (ka...@ri...) and provide (a) a CV, (b) a detailed description of software engineering skills, (c) a detailed list of prior projects, and (d) a one-paragraph statement about their interest in the advertised position. The position is available immediately, and applications will be accepted until this position is filled.

ABOUT RICE UNIVERSITY: Rice University is a private, comprehensive research university in the heart of Houston (recently ranked the most diverse city in America), adjacent to the Museum District and Texas Medical Center. It offers undergraduate and graduate degrees across eight schools and has a student body of approximately 8,000 undergraduate and graduate students. Rice consistently ranks among the top 20 national universities and the top 10 in undergraduate teaching (U.S. News & World Report); its endowment ranks among the top 20 US universities. The George R. Brown School of Engineering ranks among the top 20 undergraduate engineering programs and is strongly committed to nurturing the aspirations of faculty, staff, and students in an inclusive environment, with rankings of #8 for percent URM faculty, #14 for female faculty, and #4 for underrepresented undergraduate students among AAU institutions.
Rice University is an Equal Opportunity Employer committed to diversity at all levels and considers for employment qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national or ethnic origin, genetic information, disability, or protected veteran status. Lydia E. Kavraki, Ph.D. Kenneth and Audrey Kennedy Professor of Computing Professor of Computer Science Professor of Bioengineering Professor of Computer and Electrical Engineering Professor of Mechanical Engineering Director, Ken Kennedy Institute Rice University ka...@ri... <mailto:ka...@ri...> (713) 348-5737 https://profiles.rice.edu/faculty/lydia-e-kavraki http://www.kavrakilab.org <http://www.kavrakilab.org/> http://kenkennedy.rice.edu <http://kenkennedy.rice.edu/> |
From: Schulz, P. <phi...@st...> - 2024-11-15 15:47:15
|
Dear OMPL Team, I am currently working on my master thesis and plan to integrate sampling-based algorithms from the powerful OMPL library into our local Python repository (https://github.com/WBK-Robotics/pybullet_industrial). The objective is to create collision-free paths for 6-axis industrial robots between given configurations. I encountered issues building the Python bindings and have created a GitHub issue for this: https://github.com/ompl/ompl/issues/1196. I would greatly appreciate any assistance you can provide on this matter. Thank you very much for your support! Best regards, Philipp |
From: Lydia K. <ka...@ri...> - 2024-09-08 23:26:37
|
Are you interested in helping with our efforts to maintain and augment OMPL? Please check: https://www.kavrakilab.org/openings/manager_swe.html Best to all, Lydia Kavraki Lydia E. Kavraki, Ph.D. Noah Harding Professor of Computer Science Professor of Bioengineering Professor of Computer and Electrical Engineering Professor of Mechanical Engineering Director, Ken Kennedy Institute Rice University ka...@ri... <mailto:ka...@ri...> (713) 348-5737 https://profiles.rice.edu/faculty/lydia-e-kavraki http://www.kavrakilab.org <http://www.kavrakilab.org/> http://kenkennedy.rice.edu <http://kenkennedy.rice.edu/> |
From: hadar s. <had...@gm...> - 2023-07-03 11:51:28
|
Hi, what's the difference between the exception thrown by PRM::constructSolution ("Could not find solution path") and the enum value base::PlannerStatus::TIMEOUT, which (according to the documentation) means "The planner failed to find a solution"? Thanks! |
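A sketch that may help frame the question: TIMEOUT is one of the outcomes solve() reports through its ob::PlannerStatus return value, while the "Could not find solution path" exception originates inside PRM's roadmap search; the exact relationship between the two is best confirmed in PRM.cpp. Branching on the returned status covers the usual cases:

#include <iostream>
#include <ompl/geometric/SimpleSetup.h>

namespace ob = ompl::base;
namespace og = ompl::geometric;

// Sketch: the common way to distinguish planning outcomes.
void report(og::SimpleSetup &ss)
{
    ob::PlannerStatus status = ss.solve(5.0);
    if (status == ob::PlannerStatus::EXACT_SOLUTION)
        std::cout << "found a path that reaches the goal\n";
    else if (status == ob::PlannerStatus::APPROXIMATE_SOLUTION)
        std::cout << "only found a path that gets close to the goal\n";
    else if (status == ob::PlannerStatus::TIMEOUT)
        std::cout << "no solution within the time budget\n";
    else
        std::cout << "planner returned another status\n";
}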
From: hadar s. <had...@gm...> - 2023-05-08 10:26:38
|
Hello,

I’m using LazyPRM for a single query. First, LazyPRM is initialized with a graph previously generated with PRM, but with a different IsValid function (which considers newly added obstacles). Planning with the LazyPRM planner works great, but sometimes during the run, one of the vertices chosen to be removed from the nearest-neighbor (nn) data structure happens to be a pivot, which leads to rebuilding the nn data structure and substantially increases run time. Is there a way to avoid rebuilding? As I mentioned, the instance of LazyPRM is used once, only for a single query, and no additional queries are sent.

Best,
Hadar. |
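One possible workaround, sketched under two assumptions: that the rebuild comes from the default GNAT nearest-neighbor structure (whose pivot removal triggers a rebuild) and that LazyPRM exposes the same setNearestNeighbors template as PRM. Swapping in a brute-force structure makes removals cheap at the cost of slower queries, which may pay off for a single-query use.

#include <memory>
#include <ompl/datastructures/NearestNeighborsLinear.h>
#include <ompl/geometric/planners/prm/LazyPRM.h>

namespace ob = ompl::base;
namespace og = ompl::geometric;

// Sketch: replace the default GNAT with a linear (brute-force) structure
// so removing a vertex never triggers a rebuild. Call before setup().
std::shared_ptr<og::LazyPRM> makePlanner(const ob::SpaceInformationPtr &si)
{
    auto planner = std::make_shared<og::LazyPRM>(si);
    planner->setNearestNeighbors<ompl::NearestNeighborsLinear>();
    return planner;
}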
From: hadar s. <had...@gm...> - 2022-11-21 11:31:23
|
Hi, What is the persistent distance map that is mentioned in the LazyPRM::constructSolution function, and how can I use it in the LazyPRM planner? Best, Hadar. |
From: Aditya G. <ag...@vt...> - 2022-09-30 20:57:24
|
Here is one more update, and a possible solution. I looked through the RRTConnect code, and think that the if condition on line 240 of RRTConnect.cpp that is used to sample goal states is what is causing the failure to sample all goal states:

`if (tGoal_->size() == 0 || pis_.getSampledGoalsCount() < tGoal_->size() / 2)`

[RRTConnect.cpp](https://github.com/ompl/ompl/blob/9c3a20faaddfcd7f58fce235495f043ebee3e735/src/ompl/geometric/planners/rrt/src/RRTConnect.cpp)

That if condition means that unless the very first sampled goal state can be connected to some other state, no other goal states will be sampled. In the example I am working with, the first goal state is sitting extremely close to a large obstacle. Thus the likelihood of sampling a state between this goal state and the obstacle that can be connected to the goal state is extremely low. As such, a tree cannot grow from the first goal state and no other goal states get sampled, and thus no solution is generated.

I changed the if condition to the if condition used in RRTstar and that made RRTConnect work. This is what my change looks like:

`if (pis_.getSampledGoalsCount() < goal->maxSampleCount()) {`

I would appreciate any input on this possible solution. Thank you for your help!

On Sat, Sep 24, 2022 at 1:29 PM Aditya Gadre <ag...@vt...> wrote:

> I am providing more details about the problem I am having with multiple
> goal states. Following is the output that shows that the simple setup indeed
> has three goal states. As I had mentioned in my previous email,
> samplePosition_ in GoalStates goes from 0 to 1 and then remains at 1. The
> code used is attached.
>
> ====== PROGRAM OUTPUT START ======
> 3 goal states, threshold = 2.22045e-16, memory address = 0x7ff58d706e08
> Compound state [
> RealVectorState [50 50]
> SO2State [0]
> ]
>
> Compound state [
> RealVectorState [50 50]
> SO2State [1.0472]
> ]
>
> Compound state [
> RealVectorState [50 50]
> SO2State [1.5708]
> ]
>
> Debug: RRTConnect: Planner range detected to be 28.598431
> Info: RRTConnect: Starting planning with 1 states already in datastructure
> Info: RRTConnect: Created 9347 states (9346 start + 1 goal)
> Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
> Info: Solution found in 1.005351 seconds
> Here is the code I used for testing planning with multiple goal states.
> ====== PROGRAM OUTPUT END ======
>
> Thank you for your help!
>
> On Fri, Sep 23, 2022 at 11:45 PM Aditya Gadre <ag...@vt...> wrote:
>
>> Hello,
>>
>> I am relatively new to OMPL. I am experimenting with planning paths using
>> RRTConnect in Dubins state space, and have been able to plan paths from a
>> given starting state to the goal in the presence of obstacles. Now I am
>> trying to use the GoalStates class to specify multiple goals, but somehow
>> it is not working as expected.
>>
>> State space: Dubins
>> Planner: RRTConnect
>> Start: 1 start state
>> Goal: 3 goal states in GoalStates
>>
>> This is how I have set up the problem. The first goal state is (x1, y1,
>> yaw1), and there is an obstacle very near to the (x1, y1) position such
>> that approach at yaw1 is blocked. The two other states are (x1, y1, yaw2)
>> and (x1, y1, yaw3). Thus all the goal states have the same x and y
>> coordinates, but different yaw angles. The two yaw angles, yaw2 and yaw3,
>> are such that it is possible to compute paths from the start state to
>> (x1, y1, yaw2) and (x1, y1, yaw3). I use the GoalStates object to add the
>> three goal states. All the setup is done using SimpleSetup.
>>
>> auto goals = std::make_shared<ob::GoalStates>(si);
>> goals->addState(goal_state); // infeasible goal state
>> goals->addState(goal_state1); // feasible
>> goals->addState(goal_state2); // feasible
>> simple_setup.setGoal(goals);
>>
>> I thought that the RRTConnect planner would sample all the three goal
>> states and would be able to plan a path to one of the two feasible goal
>> states. However, it seems that only the first goal state was ever sampled.
>> In the debugger I checked the goal details in the SimpleSetup object. It
>> showed three goal states added, but the samplePosition_ remained unchanged
>> at 1, indicating that only the first goal state was sampled. The console
>> output also shows that there was only ever one goal state used in planning,
>> which leads to the planner not being able to compute a path, as the yaw
>> angle of the first goal state makes any approach to its x-y position
>> impossible. In the console output below, we can see the tree growing only
>> from the start state but not from the goal, as only the infeasible goal
>> state is sampled. The type of solution after each iteration remains an
>> APPROXIMATE solution.
>>
>> Info: RRTConnect: Starting planning with 1 states already in datastructure
>> Info: RRTConnect: Created 15246 states (15245 start + 1 goal)
>> Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
>> Info: Solution found in 1.010254 seconds
>>
>> Info: RRTConnect: Starting planning with 15246 states already in datastructure
>> Info: RRTConnect: Created 26487 states (26486 start + 1 goal)
>> Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
>> Info: Solution found in 1.009778 seconds
>>
>> Info: RRTConnect: Starting planning with 26487 states already in datastructure
>> Info: RRTConnect: Created 36780 states (36779 start + 1 goal)
>> Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
>> Info: Solution found in 1.007346 seconds
>>
>> I am most likely missing something important in the setup, but cannot
>> really determine what it is. I will greatly appreciate any help in this
>> regard.
>>
>> Thanks! |
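Until a change like that lands upstream, a workaround consistent with the thread's own analysis (a sketch, not a confirmed fix) is to leave OMPL unpatched and order the goal states so that one with a reachable approach yaw comes first; since the quoted condition only defers sampling until the goal tree grows, a feasible first goal lets the remaining ones be sampled.

#include <memory>
#include <ompl/base/goals/GoalStates.h>
#include <ompl/geometric/SimpleSetup.h>

namespace ob = ompl::base;
namespace og = ompl::geometric;

// Sketch: add a likely-feasible goal first so the goal tree can grow and
// the remaining goal states get sampled. Variable names mirror the thread.
void setGoals(og::SimpleSetup &simple_setup, const ob::SpaceInformationPtr &si,
              const ob::ScopedState<> &goal_state,  // blocked approach yaw
              const ob::ScopedState<> &goal_state1, // feasible
              const ob::ScopedState<> &goal_state2) // feasible
{
    auto goals = std::make_shared<ob::GoalStates>(si);
    goals->addState(goal_state1); // feasible goal first
    goals->addState(goal_state2);
    goals->addState(goal_state);  // blocked goal last
    simple_setup.setGoal(goals);
}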
From: Aditya G. <ag...@vt...> - 2022-09-24 17:30:36
|
I am providing more details about the problem I am having with multiple goal states. Following is the output that shows that the simple setup indeed has three goal states. As I had mentioned in my previous email, samplePosition_ in GoalStates goes from 0 to 1 and then remains at 1. The code used is attached.

====== PROGRAM OUTPUT START ======
3 goal states, threshold = 2.22045e-16, memory address = 0x7ff58d706e08
Compound state [
RealVectorState [50 50]
SO2State [0]
]

Compound state [
RealVectorState [50 50]
SO2State [1.0472]
]

Compound state [
RealVectorState [50 50]
SO2State [1.5708]
]

Debug: RRTConnect: Planner range detected to be 28.598431
Info: RRTConnect: Starting planning with 1 states already in datastructure
Info: RRTConnect: Created 9347 states (9346 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.005351 seconds
Here is the code I used for testing planning with multiple goal states.
====== PROGRAM OUTPUT END ======

Thank you for your help!

On Fri, Sep 23, 2022 at 11:45 PM Aditya Gadre <ag...@vt...> wrote:

> Hello,
>
> I am relatively new to OMPL. I am experimenting with planning paths using
> RRTConnect in Dubins state space, and have been able to plan paths from a
> given starting state to the goal in the presence of obstacles. Now I am
> trying to use the GoalStates class to specify multiple goals, but somehow
> it is not working as expected.
>
> State space: Dubins
> Planner: RRTConnect
> Start: 1 start state
> Goal: 3 goal states in GoalStates
>
> This is how I have set up the problem. The first goal state is (x1, y1,
> yaw1), and there is an obstacle very near to the (x1, y1) position such
> that approach at yaw1 is blocked. The two other states are (x1, y1, yaw2)
> and (x1, y1, yaw3). Thus all the goal states have the same x and y
> coordinates, but different yaw angles. The two yaw angles, yaw2 and yaw3,
> are such that it is possible to compute paths from the start state to
> (x1, y1, yaw2) and (x1, y1, yaw3). I use the GoalStates object to add the
> three goal states. All the setup is done using SimpleSetup.
>
> auto goals = std::make_shared<ob::GoalStates>(si);
> goals->addState(goal_state); // infeasible goal state
> goals->addState(goal_state1); // feasible
> goals->addState(goal_state2); // feasible
> simple_setup.setGoal(goals);
>
> I thought that the RRTConnect planner would sample all the three goal
> states and would be able to plan a path to one of the two feasible goal
> states. However, it seems that only the first goal state was ever sampled.
> In the debugger I checked the goal details in the SimpleSetup object. It
> showed three goal states added, but the samplePosition_ remained unchanged
> at 1, indicating that only the first goal state was sampled. The console
> output also shows that there was only ever one goal state used in planning,
> which leads to the planner not being able to compute a path, as the yaw
> angle of the first goal state makes any approach to its x-y position
> impossible. In the console output below, we can see the tree growing only
> from the start state but not from the goal, as only the infeasible goal
> state is sampled. The type of solution after each iteration remains an
> APPROXIMATE solution.
>
> Info: RRTConnect: Starting planning with 1 states already in datastructure
> Info: RRTConnect: Created 15246 states (15245 start + 1 goal)
> Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
> Info: Solution found in 1.010254 seconds
>
> Info: RRTConnect: Starting planning with 15246 states already in datastructure
> Info: RRTConnect: Created 26487 states (26486 start + 1 goal)
> Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
> Info: Solution found in 1.009778 seconds
>
> Info: RRTConnect: Starting planning with 26487 states already in datastructure
> Info: RRTConnect: Created 36780 states (36779 start + 1 goal)
> Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
> Info: Solution found in 1.007346 seconds
>
> I am most likely missing something important in the setup, but cannot
> really determine what it is. I will greatly appreciate any help in this
> regard.
>
> Thanks! |
From: Aditya G. <ag...@vt...> - 2022-09-24 04:15:26
|
Hello,

I am relatively new to OMPL. I am experimenting with planning paths using RRTConnect in Dubins state space, and have been able to plan paths from a given starting state to the goal in the presence of obstacles. Now I am trying to use the GoalStates class to specify multiple goals, but somehow it is not working as expected.

State space: Dubins
Planner: RRTConnect
Start: 1 start state
Goal: 3 goal states in GoalStates

This is how I have set up the problem. The first goal state is (x1, y1, yaw1), and there is an obstacle very near to the (x1, y1) position such that approach at yaw1 is blocked. The two other states are (x1, y1, yaw2) and (x1, y1, yaw3). Thus all the goal states have the same x and y coordinates, but different yaw angles. The two yaw angles, yaw2 and yaw3, are such that it is possible to compute paths from the start state to (x1, y1, yaw2) and (x1, y1, yaw3). I use the GoalStates object to add the three goal states. All the setup is done using SimpleSetup.

auto goals = std::make_shared<ob::GoalStates>(si);
goals->addState(goal_state); // infeasible goal state
goals->addState(goal_state1); // feasible
goals->addState(goal_state2); // feasible
simple_setup.setGoal(goals);

I thought that the RRTConnect planner would sample all three goal states and would be able to plan a path to one of the two feasible goal states. However, it seems that only the first goal state was ever sampled. In the debugger I checked the goal details in the SimpleSetup object. It showed three goal states added, but the samplePosition_ remained unchanged at 1, indicating that only the first goal state was sampled. The console output also shows that there was only ever one goal state used in planning, which leads to the planner not being able to compute a path, as the yaw angle of the first goal state makes any approach to its x-y position impossible. In the console output below, we can see the tree growing only from the start state but not from the goal, as only the infeasible goal state is sampled. The type of solution after each iteration remains an APPROXIMATE solution.

Info: RRTConnect: Starting planning with 1 states already in datastructure
Info: RRTConnect: Created 15246 states (15245 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.010254 seconds

Info: RRTConnect: Starting planning with 15246 states already in datastructure
Info: RRTConnect: Created 26487 states (26486 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.009778 seconds

Info: RRTConnect: Starting planning with 26487 states already in datastructure
Info: RRTConnect: Created 36780 states (36779 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.007346 seconds

I am most likely missing something important in the setup, but cannot really determine what it is. I will greatly appreciate any help in this regard.

Thanks! |
From: Aditya G. <ag...@vt...> - 2022-09-24 04:04:28
|
I am relatively new to OMPL. I am experimenting with planning paths using RRTConnect in Dubins state space, and have been able to plan paths from a given starting state to the goal in the presence of obstacles. Now I am trying to use the GoalStates class to specify multiple goals, but somehow it is not working as expected.

State space: Dubins
Planner: RRTConnect
Start: 1 start state
Goal: 3 goal states in GoalStates

This is how I have set up the problem. The first goal state is (x1, y1, yaw1), and there is an obstacle very near to the (x1, y1) position such that approach at yaw1 is blocked. The two other states are (x1, y1, yaw2) and (x1, y1, yaw3). Thus all the goal states have the same x and y coordinates, but different yaw angles. The two yaw angles, yaw2 and yaw3, are such that it is possible to compute paths from the start state to (x1, y1, yaw2) and (x1, y1, yaw3). I use the GoalStates object to add the three goal states. All the setup is done using SimpleSetup.

auto goals = std::make_shared<ob::GoalStates>(si);
goals->addState(goal_state); // infeasible goal state
goals->addState(goal_state1); // feasible
goals->addState(goal_state2); // feasible
simple_setup.setGoal(goals);

I thought that the RRTConnect planner would sample all the three goal states and would be able to plan a path to one of the two feasible goal states. However, it seems that only the first goal state was ever sampled. In the debugger I checked the goal details in the SimpleSetup object. It showed three goal states added, but the samplePosition_ remained unchanged at 1, indicating that only the first goal state was sampled. The console output also shows that there was only ever one goal state used in planning, which leads to the planner not being able to compute a path, as the yaw angle of the first goal state makes any approach to its x-y position impossible. In the console output below, we can see the tree growing only from the start state but not from the goal, as only the infeasible goal state is sampled. The type of solution after each iteration remains an APPROXIMATE solution.

Info: RRTConnect: Starting planning with 1 states already in datastructure
Info: RRTConnect: Created 15246 states (15245 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.010254 seconds

Info: RRTConnect: Starting planning with 15246 states already in datastructure
Info: RRTConnect: Created 26487 states (26486 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.009778 seconds

Info: RRTConnect: Starting planning with 26487 states already in datastructure
Info: RRTConnect: Created 36780 states (36779 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.007346 seconds

I am most likely missing something important in the setup, but cannot really determine what it is. I will greatly appreciate any help in this regard.

Thanks! |
From: Olzhas A. <olz...@gm...> - 2022-05-21 01:58:54
|
Hello everyone! How can I extend this enum? https://github.com/ompl/ompl/blob/main/src/ompl/base/StateSpaceTypes.h#L45 Kind regards, Olzhas Adiyatov email: olz...@gm... website: olzhas.github.io |
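The enum itself can't be extended without editing OMPL's headers, but a custom state space may not need that: StateSpace stores its type in the protected integer member type_, and the enum's STATE_SPACE_TYPE_COUNT sentinel appears intended as the first free value for user-defined types. A sketch under that assumption:

#include <ompl/base/StateSpaceTypes.h>
#include <ompl/base/spaces/RealVectorStateSpace.h>

namespace ob = ompl::base;

// Sketch: claim a type identifier beyond the built-in enum values so
// getType() can distinguish this space from the stock ones.
class MyStateSpace : public ob::RealVectorStateSpace
{
public:
    static constexpr int MY_STATE_SPACE_TYPE = ob::STATE_SPACE_TYPE_COUNT + 1;

    MyStateSpace() : ob::RealVectorStateSpace(3)
    {
        setName("My" + getName());
        type_ = MY_STATE_SPACE_TYPE;
    }
};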
From: Adam F. <afi...@cs...> - 2022-01-12 10:37:36
|
Hi all, I posted about this on GitHub as well—not sure what the right place is. I am using OMPL version 1.5.2 through Python with Pybullet as the collision checker. Specifically, I am planning for a Franka arm. I pretty frequently find that when I plan for a problem—especially a difficult problem—the final state in the plan is very far off from the goal I gave the planner. I’ve tested this with the simple RRT + simple setup and have been running into the issue frequently. Is there some parameter to specify the acceptable region around the goal? I posted an example of the code I use in the GitHub issue here: https://github.com/ompl/ompl/issues/866 Hopefully there’s some easy fix on my end and I’m just doing something ill-advised. Thanks for your help, Adam -- Adam Fishman He/Him fishy.ai |
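The usual knob for this (shown in C++; the Python bindings should mirror the signature) is the goal threshold: SimpleSetup::setStartAndGoalStates takes an optional third argument, defaulting to machine epsilon, and states within that distance of the goal count as reaching it. Separately, a final state far from the goal usually signals an approximate solution, which haveExactSolutionPath() can detect. The tolerance below is illustrative.

#include <ompl/geometric/SimpleSetup.h>

namespace ob = ompl::base;
namespace og = ompl::geometric;

// Sketch: set an explicit goal tolerance and reject approximate results.
bool planWithTolerance(og::SimpleSetup &ss, const ob::ScopedState<> &start,
                       const ob::ScopedState<> &goal)
{
    ss.setStartAndGoalStates(start, goal, 0.05); // 0.05: illustrative tolerance
    ss.solve(10.0);
    return ss.haveExactSolutionPath(); // false if the path only gets close
}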
From: Mark M. <mm...@ri...> - 2021-12-01 22:25:57
|
Hi Olzhas,

These constants were empirically chosen to make the Dubins state space pass the space test unit tests and sanity checks. We have not spent significant time trying to minimize them as much as possible, let alone derive analytic bounds. If you know they can be made much smaller and still pass the unit tests, please submit a pull request with the desired change.

Best,
Mark

> On Nov 30, 2021, at 8:20 AM, Olzhas Adiyatov <olz...@gm...> wrote:
>
> Hello everyone!
>
> How were these two constants determined?
>
> https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L48 <https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L48>
> https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L49 <https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L49>
>
> Kind regards,
> Olzhas
> _______________________________________________
> ompl-users mailing list
> omp...@li... <mailto:omp...@li...>
> https://lists.sourceforge.net/lists/listinfo/ompl-users <https://lists.sourceforge.net/lists/listinfo/ompl-users> |
From: Olzhas A. <olz...@gm...> - 2021-11-30 16:20:37
|
Hello everyone! How were these two constants determined? https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L48 https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L49 Kind regards, Olzhas |
From: Kejia R. <kr...@ri...> - 2021-10-06 10:26:18
|
Hello Mark,

Thank you for your response and suggestions. I am currently trying to implement my new planner class inheriting from the "RRT" class in Python. When I override the main "solve" method, I still want to use the "RRT::Motion" class inside it. For example, in the main "solve" method of my own planner, I want to do something like below:

class MyPlanner(oc.RRT):
    ...
    def solve(self, time_limit):
        ...
        motion = oc.RRT.Motion(self.si_)
        ...

However, in Python, I cannot find this "RRT::Motion" class and cannot get access to it. It looks like this class is defined as a protected inner class under RRT in C++, and thus I am not sure whether the Python bindings have generated correspondences even for these protected classes. So I am sending this email to ask for your help. Could you please help provide some suggestions on how I can access the "RRT::Motion" class or any other protected attributes/methods in Python? Thank you!

Best,
Kejia

On Sun, Sep 19, 2021 at 10:31 PM Mark Moll <mm...@ri...> wrote:

> Hi Kejia,
>
> The nearest neighbors data structure may not be a tree and even if it is,
> its tree and the tree constructed by the RRT algorithm are not the same.
> There is an implicit tree structure in RRT defined using the
> RRT::Motion::parent pointers. You would need to do some additional
> bookkeeping (possibly by extending the RRT::Motion class) to keep track of
> a Motion instance’s depth and to loop over the leaves.
>
> Best,
>
> Mark
>
> On Sep 19, 2021, at 7:49 PM, Kejia Ren <kr...@ri...> wrote:
>
> Hello,
>
> We want to inherit from the RRT planner in OMPL to implement our own
> planner. Basically, I think we just need to overwrite the main solve()
> function since all other steps are the same. What we want to do inside the
> solve() function is, we want to retrieve the depth of the tree structure (I
> think it is an instance of the NearestNeighbors class inside RRT) and also
> want to decide where the leaf nodes are and evaluate the state of each leaf
> node. Below are a few lines of pseudocode explaining what we want to do
> with it. But we are not sure whether we have access to the tree depth and
> whether the data structure used in RRT can decide where the leaf nodes are.
> So we are sending this email to ask for help. Could you please give some
> instructions or advice for how to do that? Thank you so much!
>
> solve():
>     ...
>     if tree.depth > 10:
>         for n in leaf_nodes:
>             evaluate(n -> state)
>     ...
>
> Best,
> Kejia
> _______________________________________________
> ompl-users mailing list
> omp...@li...
> https://lists.sourceforge.net/lists/listinfo/ompl-users |
From: Kejia R. <kr...@ri...> - 2021-10-05 02:43:45
|
Hello,

Below is some of my code for setting up the planner. I created an instance called "ss" of the "SimpleSetup" class, and below is how I set the control sampling for the planner:

space_info = ss.getSpaceInformation()
space_info.setPropagationStepSize(0.5)
space_info.setMinMaxControlDuration(1, 5)
planner = oc.RRT(self.space_info)
ss.setPlanner(planner)

And inside my "propagate" function, I just tried to print out the duration at the very beginning, as below, and found that the duration always takes the same value whenever the function is called:

def propagate(near, controls, duration, state):
    print(duration)
    ...

Most of the time the planner can successfully solve for a feasible path. So I think everything else looks fine, except for this control duration inconsistency issue as I mentioned before. Hope anyone can help figure out what could be wrong. Thanks!

Best,
Kejia

On Fri, Oct 1, 2021 at 2:09 AM Kejia Ren <kr...@ri...> wrote:

> Hello,
>
> We are using the RRT planner with controls in Python. I just found out a
> problem that, even if I already set the control duration to have different
> values for min and max, the "duration" input to the "propagate" function
> always takes the same value. Specifically, I set the space information of
> my planner with "setPropagationStepSize(0.5)" and
> "setMinMaxControlDuration(1, 5)", so based on my understanding, the control
> duration should be sampled with five possible values {0.5, 1.0, 1.5, 2.0,
> 2.5}. However, inside my "propagate" function I found the input for
> "duration" always takes the smallest value, which is 0.5. And after the
> plan is solved, in the solution path the durations take different values.
> Right now this weird problem is causing significant inconsistency between
> the planned path and its actual execution. I have no idea where the problem
> could come from, and don't know how to solve it since we are not going to
> modify anything inside the main "solve" function. Hope anybody could help
> provide any clues based on my description. Thank you!
>
> Best,
> Kejia |
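The observed behavior matches how OMPL applies controls (an inference from the propagation loop in oc::SpaceInformation, worth verifying against the source): the user-supplied propagator is invoked once per discrete step, always with duration equal to the propagation step size, and a control sampled for k steps arrives as k consecutive calls. The durations stored in the solution path are per-control totals (step size times step count), which is why they differ from what propagate prints. A sketch of a propagator written with that convention in mind, for a hypothetical SE2 unicycle model:

#include <cmath>
#include <ompl/base/spaces/SE2StateSpace.h>
#include <ompl/control/spaces/RealVectorControlSpace.h>

namespace ob = ompl::base;
namespace oc = ompl::control;

// Sketch: `duration` equals the propagation step size (0.5 in the setup
// above) on every call; a control held for k steps arrives as k
// consecutive calls, not as one call with duration 0.5 * k.
void propagate(const ob::State *start, const oc::Control *control,
               const double duration, ob::State *result)
{
    const auto *se2 = start->as<ob::SE2StateSpace::StateType>();
    const double *u = control->as<oc::RealVectorControlSpace::ControlType>()->values;
    auto *out = result->as<ob::SE2StateSpace::StateType>();
    out->setX(se2->getX() + u[0] * duration * std::cos(se2->getYaw()));
    out->setY(se2->getY() + u[0] * duration * std::sin(se2->getYaw()));
    out->setYaw(se2->getYaw() + u[1] * duration);
}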
From: Mark M. <mm...@ri...> - 2021-10-05 01:29:10
|
Hi Justin,

If you know the pose and shape of the obstacles at any given time, then you can create a CompoundStateSpace with a TimeStateSpace as one of the components and the current StateSpace as the other component. In other words, you’d explicitly keep track of time as part of the state of the robot. You will probably have to create a custom motion validator that will ensure that the robot can only move forward in time (and be collision-free with obstacles). You may also need additional checks to make sure the robot’s speed doesn’t exceed any limits.

Best,
Mark

> On Oct 3, 2021, at 11:48 PM, Justin Kottinger <juk...@co...> wrote:
>
> Hello,
>
> I am attempting to do rigid body planning with controls and time-dependent
> obstacles. To this end, I have implemented a state validity checker class
> that inherits the ob::StateValidityChecker class but also tracks when the
> obstacles are present. I am running into an issue where time might not be
> tracked within the ob::State object and no ob::StateValidityChecker member
> functions deal with time (to the best of my knowledge). Does anyone know of
> some clever ways to overcome this challenge?
>
> Thank you in advance for your time,
>
> Justin |
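A sketch of the compound space Mark describes (the SE2 subspace and the weights are illustrative assumptions; a validity checker would then read the pose and the time component from the same compound state):

#include <memory>
#include <ompl/base/StateSpace.h>
#include <ompl/base/spaces/SE2StateSpace.h>
#include <ompl/base/spaces/TimeStateSpace.h>

namespace ob = ompl::base;

// Sketch: augment the robot's state with explicit time, so validity
// checking can ask "is this pose in collision at this instant?".
ob::StateSpacePtr makeSpaceTimeSpace()
{
    auto space = std::make_shared<ob::CompoundStateSpace>();
    space->addSubspace(std::make_shared<ob::SE2StateSpace>(), 1.0);
    space->addSubspace(std::make_shared<ob::TimeStateSpace>(), 0.5);
    return space;
}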
From: Justin K. <juk...@co...> - 2021-10-04 07:20:36
|
Hello,

I am attempting to do rigid body planning with controls and time-dependent obstacles. To this end, I have implemented a state validity checker class that inherits the ob::StateValidityChecker class but also tracks when the obstacles are present. I am running into an issue where time might not be tracked within the ob::State object and no ob::StateValidityChecker member functions deal with time (to the best of my knowledge). Does anyone know of some clever ways to overcome this challenge?

Thank you in advance for your time,

Justin |
From: Kejia R. <kr...@ri...> - 2021-10-01 07:09:49
|
Hello,

We are using the RRT planner with controls in Python. I just found a problem: even if I set the control duration to have different values for min and max, the "duration" input to the "propagate" function always takes the same value. Specifically, I set the space information of my planner with "setPropagationStepSize(0.5)" and "setMinMaxControlDuration(1, 5)", so based on my understanding, the control duration should be sampled with five possible values {0.5, 1.0, 1.5, 2.0, 2.5}. However, inside my "propagate" function I found the input for "duration" always takes the smallest value, which is 0.5. And after the plan is solved, the durations in the solution path take different values. Right now this weird problem is causing significant inconsistency between the planned path and its actual execution. I have no idea where the problem could come from, and don't know how to solve it since we are not going to modify anything inside the main "solve" function. Hope anybody could help provide any clues based on my description. Thank you!

Best,
Kejia |
From: Mark M. <mm...@ri...> - 2021-09-20 03:31:46
|
Hi Kejia,

The nearest neighbors data structure may not be a tree, and even if it is, its tree and the tree constructed by the RRT algorithm are not the same. There is an implicit tree structure in RRT defined using the RRT::Motion::parent pointers. You would need to do some additional bookkeeping (possibly by extending the RRT::Motion class) to keep track of a Motion instance’s depth and to loop over the leaves.

Best,

Mark

> On Sep 19, 2021, at 7:49 PM, Kejia Ren <kr...@ri...> wrote:
>
> Hello,
>
> We want to inherit from the RRT planner in OMPL to implement our own
> planner. Basically, I think we just need to overwrite the main solve()
> function since all other steps are the same. What we want to do inside the
> solve() function is, we want to retrieve the depth of the tree structure (I
> think it is an instance of the NearestNeighbors class inside RRT) and also
> want to decide where the leaf nodes are and evaluate the state of each leaf
> node. Below are a few lines of pseudocode explaining what we want to do
> with it. But we are not sure whether we have access to the tree depth and
> whether the data structure used in RRT can decide where the leaf nodes are.
> So we are sending this email to ask for help. Could you please give some
> instructions or advice for how to do that? Thank you so much!
>
> solve():
>     ...
>     if tree.depth > 10:
>         for n in leaf_nodes:
>             evaluate(n -> state)
>     ...
>
> Best,
> Kejia
> _______________________________________________
> ompl-users mailing list
> omp...@li...
> https://lists.sourceforge.net/lists/listinfo/ompl-users |
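A sketch of that bookkeeping against the geometric RRT (an illustration under assumptions, not a drop-in patch: it presumes a custom solve() that allocates DepthMotion instances and fills in depth on insertion, the control-space RRT would need the analogous change, and only a C++ subclass can reach the protected Motion class):

#include <unordered_set>
#include <vector>
#include <ompl/geometric/planners/rrt/RRT.h>

namespace ob = ompl::base;
namespace og = ompl::geometric;

// Sketch: a Motion subclass that stores its depth, plus a helper that
// finds leaves by listing all motions and marking every parent as
// "not a leaf".
class DepthRRT : public og::RRT
{
public:
    DepthRRT(const ob::SpaceInformationPtr &si) : og::RRT(si) {}

protected:
    struct DepthMotion : Motion
    {
        DepthMotion(const ob::SpaceInformationPtr &si) : Motion(si) {}
        unsigned int depth{0}; // parent's depth + 1, set when added to the tree
    };

    std::vector<Motion *> leaves() const
    {
        std::vector<Motion *> motions;
        if (nn_)
            nn_->list(motions); // all motions currently in the tree
        std::unordered_set<Motion *> parents;
        for (auto *m : motions)
            if (m->parent != nullptr)
                parents.insert(m->parent);
        std::vector<Motion *> result;
        for (auto *m : motions)
            if (parents.find(m) == parents.end())
                result.push_back(m); // no motion points to it: a leaf
        return result;
    }
};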
From: Kejia R. <kr...@ri...> - 2021-09-20 03:18:14
|
Hello,

We want to inherit from the RRT planner in OMPL to implement our own planner. Basically, I think we just need to overwrite the main solve() function since all other steps are the same. What we want to do inside the solve() function is, we want to retrieve the depth of the tree structure (I think it is an instance of the NearestNeighbors class inside RRT) and also want to decide where the leaf nodes are and evaluate the state of each leaf node. Below are a few lines of pseudocode explaining what we want to do with it. But we are not sure whether we have access to the tree depth and whether the data structure used in RRT can decide where the leaf nodes are. So we are sending this email to ask for help. Could you please give some instructions or advice for how to do that? Thank you so much!

solve():
    ...
    if tree.depth > 10:
        for n in leaf_nodes:
            evaluate(n -> state)
    ...

Best,
Kejia |
From: Emre C. G. (S. <emr...@sa...> - 2021-09-03 23:05:22
|
Hello,

How about goal sampling on a separate thread using the threading module of Python, where sampled goal states are added to an ob.GoalStates class? I think it is working. Can you check the following demo that I modified from RigidBodyPlanning?

*My Code:*

import threading
import time

import numpy as np

from ompl import base as ob
from ompl import geometric as og


def isStateValid(state):
    if 0.2 <= state[0] <= 0.8 and 0.2 <= state[1] <= 0.8:
        return False
    else:
        return True


def sampleFunc():
    t = 1000 * time.time()
    np.random.seed(int(t) % 2 ** 32)
    random_state = np.random.uniform(low=np.array([0.75, 0.75]),
                                     high=np.array([0.95, 0.95]), size=(2,))
    if isStateValid(random_state):
        return random_state
    else:
        return None


def add_goal(stop, ss):
    while True:
        sgoal = ob.State(ss)
        random_state = sampleFunc()
        if random_state is not None:
            for i in range(2):
                sgoal[i] = random_state[i]
            goal.addState(sgoal)
        if stop():
            break


if __name__ == '__main__':
    # create state space and add bounds
    space = ob.RealVectorStateSpace(2)
    bounds = ob.RealVectorBounds(2)
    bounds.setLow(0.)
    bounds.setHigh(1.)
    space.setBounds(bounds)
    # set state validity checker
    ss = og.SimpleSetup(space)
    ss.setStateValidityChecker(ob.StateValidityCheckerFn(isStateValid))
    # define start state
    sstart = ob.State(space)
    for i in range(2):
        sstart[i] = 0.
    # define goal as global variable
    global goal
    # goal as GoalStates
    goal = ob.GoalStates(ss.getSpaceInformation())
    # Initially no goal states in goal
    print("Initial Number of Goal States: ", goal.getStateCount())
    # define goal thread and start
    stop_thread = False
    goal_thread = threading.Thread(target=add_goal, args=(lambda: stop_thread, space,))
    goal_thread.start()
    # define and set an optimum planner to see if it finds different goals
    planner = og.RRTstar(ss.getSpaceInformation())
    planner.setProblemDefinition(ss.getProblemDefinition())
    ss.setPlanner(planner)
    # set start state and the goal
    ss.setStartState(sstart)
    ss.setGoal(goal)
    # check how many goal states are found before planning
    print("Number of Goal States: ", goal.getStateCount())
    # start planning
    solved = ss.solve(10.0)
    if solved:
        # stop thread
        stop_thread = True
        goal_thread.join()
        # total number of states
        print("Final Number of Goal States: ", goal.getStateCount())
        # print the simplified path
        print(ss.getSolutionPath())

*The output:*

Initial Number of Goal States: 0
Number of Goal States: 68
Debug: RRTstar: Planner range detected to be 0.282843
Info: RRTstar: No optimization objective specified. Defaulting to optimizing path length for the allowed planning time.
Info: RRTstar: Started planning with 1 states. Seeking a solution better than 0.00000.
Info: RRTstar: Initial k-nearest value of 25
Info: RRTstar: Found an initial solution with a cost of 1.61 in 66 iterations (24 vertices in the graph)
Info: RRTstar: Created 708 new states. Checked 130827 rewire options. 52 goal states in tree. Final solution cost 1.460
Info: Solution found in 10.038575 seconds
Final Number of Goal States: 93746

Geometric path with 8 states
RealVectorState [0 0]
RealVectorState [0.188465 0.0446338]
RealVectorState [0.463898 0.105294]
RealVectorState [0.578831 0.131403]
RealVectorState [0.808084 0.189118]
RealVectorState [0.866363 0.437726]
RealVectorState [0.896772 0.648098]
RealVectorState [0.921206 0.808713]

On Mon, Aug 30, 2021 at 6:55 PM Mark Moll <mm...@ri...> wrote:

> Hi Emre,
>
> This may be difficult to do. GoalLazySamples spawns a separate thread to
> generate valid goal samples. If this thread also calls Python code (e.g.,
> if your StateValidityChecker is written in Python), then you run into
> trouble with Python’s Global Interpreter Lock (GIL). This is a limitation
> of the current bindings.
>
> Best,
>
> Mark
>
> On Aug 29, 2021, at 11:56 AM, Emre Cemal Gönen (Student) <emr...@sa...> wrote:
>
> Hello,
>
> Can you provide an example of how to define GoalLazySamples using Python
> Bindings?
>
> I define it in the following way, but I get an error of
>
> terminate called after throwing an instance of
> 'boost::python::error_already_set'
> Aborted (core dumped)
>
> My GoalLazySamples definition:
>
> class GoalLRegion(ob.GoalLazySamples):
>     def __init__(self, si, samplerFunc, goalRegionParam):
>         super(GoalLRegion, self).__init__(si, samplerFunc)
>         self.mySampler = ob.ValidStateSampler(si)
>         self.goalRegionParam = goalRegionParam
>
>     def distanceGoal(self, state):
>         # Euclidean distance to the goal region center, minus its radius
>         d = np.sqrt((state[0] - self.goalRegionParam[0]) ** 2 +
>                     (state[1] - self.goalRegionParam[1]) ** 2) - \
>             self.goalRegionParam[2]
>         return d
>
>     def sampleGoal(self, state):
>         self.mySampler.sample(state)
>
>     def maxSampleCount(self):
>         return 100
>
>     def mySamplingFunction(self, gls, state):
>         return True
>
> goal = GoalLRegion(ss.getSpaceInformation(), ob.GoalSamplingFn(GoalLRegion.mySamplingFunction), goalRegionParam)
>
> --
> *EMRE CEMAL GÖNEN*
> YÜKSEK LİSANS ÖĞRENCİSİ
> GRADUATE STUDENT
> ------------------------------
> Sabancı Üniversitesi
> Üniversite Caddesi No:27
> 34956 Orta Mahalle
> Tuzla - İstanbul
> *T* 0 216 483 90 00 - 2331
> *www.sabanciuniv.edu* <https://www.sabanciuniv.edu/>
>
> _______________________________________________
> ompl-users mailing list
> omp...@li...
> https://lists.sourceforge.net/lists/listinfo/ompl-users

--
*EMRE CEMAL GÖNEN*
YÜKSEK LİSANS ÖĞRENCİSİ
GRADUATE STUDENT
------------------------------
Sabancı Üniversitesi
Üniversite Caddesi No:27
34956 Orta Mahalle
Tuzla - İstanbul
*T* 0 216 483 90 00 - 2331
*www.sabanciuniv.edu* <https://www.sabanciuniv.edu/> <http://www.sabanciuniv.edu> |