Messages by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 | | | | | | | | 1 | | | | |
| 2011 | 4 | 6 | 16 | 9 | 6 | 2 | | 14 | 10 | 12 | 15 | 3 |
| 2012 | 15 | 9 | 10 | 19 | 20 | 14 | 1 | 2 | | 3 | 1 | 2 |
| 2013 | 13 | 8 | 11 | 20 | 11 | 11 | 12 | 3 | | 7 | 9 | 1 |
| 2014 | 1 | 4 | 13 | 5 | 10 | 27 | 17 | | 8 | 12 | 19 | 31 |
| 2015 | 21 | 11 | 15 | 1 | 15 | 11 | 23 | 6 | 28 | 17 | 18 | 1 |
| 2016 | 14 | 42 | 20 | 10 | 11 | 3 | 9 | 6 | 6 | 8 | | 1 |
| 2017 | 3 | 1 | 2 | 6 | 11 | 5 | 1 | 1 | 1 | 4 | 1 | 8 |
| 2018 | 8 | 1 | | 2 | 3 | 8 | 21 | 7 | 12 | 4 | 3 | 2 |
| 2019 | 9 | 7 | 5 | 1 | 4 | 1 | 2 | 2 | 2 | 5 | 2 | 5 |
| 2020 | 5 | 5 | 9 | | 2 | 10 | 7 | 1 | | 3 | | 1 |
| 2021 | 4 | 5 | 2 | 5 | 1 | | 2 | 13 | 5 | 5 | 1 | 1 |
| 2022 | 1 | | | | 1 | | | | 4 | | 1 | |
| 2023 | | | | | 1 | | 1 | | | | | |
| 2024 | | | | | | | | | 1 | | | |
From: Lydia K. <ka...@ri...> - 2024-09-08 23:26:37
|
Are you interested in helping with our efforts to maintain and augment OMPL? Please check: https://www.kavrakilab.org/openings/manager_swe.html

Best to all,
Lydia Kavraki

Lydia E. Kavraki, Ph.D.
Noah Harding Professor of Computer Science
Professor of Bioengineering
Professor of Computer and Electrical Engineering
Professor of Mechanical Engineering
Director, Ken Kennedy Institute
Rice University
ka...@ri...
(713) 348-5737
https://profiles.rice.edu/faculty/lydia-e-kavraki
http://www.kavrakilab.org
http://kenkennedy.rice.edu |
From: hadar s. <had...@gm...> - 2023-07-03 11:51:28
|
Hi, what's the difference between: the exception thrown by PRM::constructSolution : "Could not find solution path " and: the enum base::PlannerStatus::TIMEOUT which means (according to the documentation) "The planner failed to find a solution" ? Thanks! |
From: hadar s. <had...@gm...> - 2023-05-08 10:26:38
|
Hello, I’m using the lazy prm for a single query. First, lazy prm is initialized with a graph previously generated with prm, but with a different IsValid function (which considers newly added obstacles). Planning with lazy prm planner works great, but sometimes during the run, one of the vertices chosen to be removed from nearest neighbor (nn) data structure happens to be a pivot, which leads to rebuilding nn data structure and substantially increases run time. Is there a way to avoid rebuilding? As I mentioned, the instance of lazy prm is used once, only for a single query, and no additional queries are sent. Best, Hadar. |
From: hadar s. <had...@gm...> - 2022-11-21 11:31:23
|
Hi, What is the persistent distance map that is mentioned in LazyPRM::constructSolution function and how can I use it in the LazyPRM planner? Best, Hadar. |
From: Aditya G. <ag...@vt...> - 2022-09-30 20:57:24
|
Here is one more update, and a possible solution. I looked through the RRTConnect code, and I think the if condition on line 240 of RRTConnect.cpp that is used to sample goal states is what is causing the failure to sample all goal states.

`if (tGoal_->size() == 0 || pis_.getSampledGoalsCount() < tGoal_->size() / 2)`

[RRTConnect.cpp]( https://github.com/ompl/ompl/blob/9c3a20faaddfcd7f58fce235495f043ebee3e735/src/ompl/geometric/planners/rrt/src/RRTConnect.cpp )

That if condition means that unless the very first sampled goal state can be connected to some other state, no other goal states will be sampled. In the example I am working with, the first goal state sits extremely close to a large obstacle, so the likelihood of sampling a state between this goal state and the obstacle that can be connected to the goal state is extremely low. As a result, a tree cannot grow from the first goal state, no other goal states get sampled, and no solution is generated. I changed the if condition to the one used in RRTstar, and that made RRTConnect work. This is what my change looks like:

`if (pis_.getSampledGoalsCount() < goal->maxSampleCount()) {`

I would appreciate any input on this possible solution. Thank you for your help!

On Sat, Sep 24, 2022 at 1:29 PM Aditya Gadre <ag...@vt...> wrote: > I am providing more details about the problem I am having with multiple > goal states. Following is the output that shows that simple setup indeed > has three goal states. As I had mentioned in my previous email, > samplePosition_ in GoalStates goes from 0 to 1 and then remains at 1. The > code used is attached. 
> > ====== PROGRAM OUTPUT START ====== > 3 goal states, threshold = 2.22045e-16, memory address = 0x7ff58d706e08 > Compound state [ > RealVectorState [50 50] > SO2State [0] > ] > > Compound state [ > RealVectorState [50 50] > SO2State [1.0472] > ] > > Compound state [ > RealVectorState [50 50] > SO2State [1.5708] > ] > > Debug: RRTConnect: Planner range detected to be 28.598431 > Info: RRTConnect: Starting planning with 1 states already in > datastructure > Info: RRTConnect: Created 9347 states (9346 start + 1 goal) > Info: ProblemDefinition: Adding approximate solution from planner > RRTConnect > Info: Solution found in 1.005351 seconds > Here is the code I used for testing planning with multiple goal states. > ====== PROGRAM OUTPUT End ====== > > Thank you for your help! > > > On Fri, Sep 23, 2022 at 11:45 PM Aditya Gadre <ag...@vt...> wrote: > >> Hello, >> >> I am relatively new to OMPL. I am experimenting with planning paths using >> RRTconnect in Dubin's state space, and have been able to plan paths from a >> given starting state to the goal in the presence of obstacles. Now I am >> trying to use the GoalStates class to specify multiple goals, but somehow >> it is not working as expected. >> >> State-space: Dubin's >> Planner: RRTconnect >> Start: 1 start state >> Goal: 3 goal states in GoalStates >> >> This is how I have set up the problem. The first goal state is (x1, y1, >> yaw1), and there is an obstacle very near to (x1, y1) position such that >> approach at yaw1 is blocked. The two other states are (x1, y1, yaw2) and >> (x1, y1, yaw3). Thus all the goal states have the same x and y coordinates, >> but different yaw angles. The two yaw angles, yaw2 and yaw3, are such that >> it is possible to compute paths from the start state to (x1, y1, yaw2) and >> (x1, y1, yaw3). I use the GoalStates object to add the three goal states. >> All the setup is done using SimpleSetup. 
>> >> auto goals = std::make_shared<ob::GoalStates>(si); >> goals->addState(goal_state); // infeasible goal state >> goals->addState(goal_state1); // feasible >> goals->addState(goal_state2); // feasible >> simple_setup.setGoal(goals); >> >> I thought that the RRTconnect planner would sample all the three goal >> states and would be able to plan a path to one of the two feasible goal >> states. However, it seems that only the first goal state was ever sampled. >> In the debugger I checked the goal details in the SimpleSetup object. It >> showed three goal states added, but the samplePosition_ remained unchanged >> at 1, indicating that only the first goal state was sampled. The console >> output also shows that there ever was one goal state used in planning, >> which leads to the planner not being able to compute a path as the yaw >> angle of the first goal state makes any approach to its x-y possible >> impossible. In the console output below, we can see the tree growing only >> from the start state but not from the goal as only the infeasible goal >> state is sampled. The type of solution after each iteration remains >> APPROXIMATE solution. 
>> >> Info: RRTConnect: Starting planning with 1 states already in >> datastructure >> Info: RRTConnect: Created 15246 states (15245 start + 1 goal) >> Info: ProblemDefinition: Adding approximate solution from planner >> RRTConnect >> Info: Solution found in 1.010254 seconds >> >> Info: RRTConnect: Starting planning with 15246 states already in >> datastructure >> Info: RRTConnect: Created 26487 states (26486 start + 1 goal) >> Info: ProblemDefinition: Adding approximate solution from planner >> RRTConnect >> Info: Solution found in 1.009778 seconds >> >> Info: RRTConnect: Starting planning with 26487 states already in >> datastructure >> Info: RRTConnect: Created 36780 states (36779 start + 1 goal) >> Info: ProblemDefinition: Adding approximate solution from planner >> RRTConnect >> Info: Solution found in 1.007346 seconds >> >> I am most likely missing something important in the setup, but cannot >> really determine what it is. I will greatly appreciate any help in this >> regard. >> >> Thanks! >> >> |
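The stalling behavior described in this thread can be reproduced with a toy simulation. This is plain Python with no OMPL dependency: `size` and `sampled` are stand-in counters for `tGoal_->size()` and `pis_.getSampledGoalsCount()`, and the growth model is deliberately simplified, so treat it as a sketch of the logic rather than the planner itself.

```python
def goals_sampled(condition, num_goals, goal_tree_can_grow):
    """Simulate the goal-sampling loop with stand-in counters."""
    size = 0      # stand-in for tGoal_->size()
    sampled = 0   # stand-in for pis_.getSampledGoalsCount()
    for _ in range(1000):
        if condition(size, sampled, num_goals) and sampled < num_goals:
            sampled += 1
            size += 1  # a sampled goal state is added to the goal tree
        elif goal_tree_can_grow:
            size += 1  # the goal tree extends from an existing node
    return sampled

# Original condition (paraphrased from RRTConnect):
orig = lambda size, sampled, n: size == 0 or sampled < size // 2
# Proposed condition (as used in RRTstar):
fixed = lambda size, sampled, n: sampled < n

print(goals_sampled(orig, 3, goal_tree_can_grow=False))   # 1: stalls after the first goal
print(goals_sampled(fixed, 3, goal_tree_can_grow=False))  # 3: all goals sampled
print(goals_sampled(orig, 3, goal_tree_can_grow=True))    # 3: original is fine once the tree grows
```

With a trapped first goal the original condition never becomes true again after the first sample, which matches the behavior reported in the email; once the goal tree can grow, both conditions eventually sample every goal.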
From: Aditya G. <ag...@vt...> - 2022-09-24 17:30:36
|
I am providing more details about the problem I am having with multiple goal states. Following is the output that shows that simple setup indeed has three goal states. As I had mentioned in my previous email, samplePosition_ in GoalStates goes from 0 to 1 and then remains at 1. The code used is attached. ====== PROGRAM OUTPUT START ====== 3 goal states, threshold = 2.22045e-16, memory address = 0x7ff58d706e08 Compound state [ RealVectorState [50 50] SO2State [0] ] Compound state [ RealVectorState [50 50] SO2State [1.0472] ] Compound state [ RealVectorState [50 50] SO2State [1.5708] ] Debug: RRTConnect: Planner range detected to be 28.598431 Info: RRTConnect: Starting planning with 1 states already in datastructure Info: RRTConnect: Created 9347 states (9346 start + 1 goal) Info: ProblemDefinition: Adding approximate solution from planner RRTConnect Info: Solution found in 1.005351 seconds Here is the code I used for testing planning with multiple goal states. ====== PROGRAM OUTPUT End ====== Thank you for your help! On Fri, Sep 23, 2022 at 11:45 PM Aditya Gadre <ag...@vt...> wrote: > Hello, > > I am relatively new to OMPL. I am experimenting with planning paths using > RRTconnect in Dubin's state space, and have been able to plan paths from a > given starting state to the goal in the presence of obstacles. Now I am > trying to use the GoalStates class to specify multiple goals, but somehow > it is not working as expected. > > State-space: Dubin's > Planner: RRTconnect > Start: 1 start state > Goal: 3 goal states in GoalStates > > This is how I have set up the problem. The first goal state is (x1, y1, > yaw1), and there is an obstacle very near to (x1, y1) position such that > approach at yaw1 is blocked. The two other states are (x1, y1, yaw2) and > (x1, y1, yaw3). Thus all the goal states have the same x and y coordinates, > but different yaw angles. 
The two yaw angles, yaw2 and yaw3, are such that > it is possible to compute paths from the start state to (x1, y1, yaw2) and > (x1, y1, yaw3). I use the GoalStates object to add the three goal states. > All the setup is done using SimpleSetup. > > auto goals = std::make_shared<ob::GoalStates>(si); > goals->addState(goal_state); // infeasible goal state > goals->addState(goal_state1); // feasible > goals->addState(goal_state2); // feasible > simple_setup.setGoal(goals); > > I thought that the RRTconnect planner would sample all the three goal > states and would be able to plan a path to one of the two feasible goal > states. However, it seems that only the first goal state was ever sampled. > In the debugger I checked the goal details in the SimpleSetup object. It > showed three goal states added, but the samplePosition_ remained unchanged > at 1, indicating that only the first goal state was sampled. The console > output also shows that there ever was one goal state used in planning, > which leads to the planner not being able to compute a path as the yaw > angle of the first goal state makes any approach to its x-y possible > impossible. In the console output below, we can see the tree growing only > from the start state but not from the goal as only the infeasible goal > state is sampled. The type of solution after each iteration remains > APPROXIMATE solution. 
> > Info: RRTConnect: Starting planning with 1 states already in > datastructure > Info: RRTConnect: Created 15246 states (15245 start + 1 goal) > Info: ProblemDefinition: Adding approximate solution from planner > RRTConnect > Info: Solution found in 1.010254 seconds > > Info: RRTConnect: Starting planning with 15246 states already in > datastructure > Info: RRTConnect: Created 26487 states (26486 start + 1 goal) > Info: ProblemDefinition: Adding approximate solution from planner > RRTConnect > Info: Solution found in 1.009778 seconds > > Info: RRTConnect: Starting planning with 26487 states already in > datastructure > Info: RRTConnect: Created 36780 states (36779 start + 1 goal) > Info: ProblemDefinition: Adding approximate solution from planner > RRTConnect > Info: Solution found in 1.007346 seconds > > I am most likely missing something important in the setup, but cannot > really determine what it is. I will greatly appreciate any help in this > regard. > > Thanks! > > |
From: Aditya G. <ag...@vt...> - 2022-09-24 04:15:26
|
Hello,

I am relatively new to OMPL. I am experimenting with planning paths using RRTConnect in Dubins state space, and have been able to plan paths from a given starting state to the goal in the presence of obstacles. Now I am trying to use the GoalStates class to specify multiple goals, but it is not working as expected.

State space: Dubins
Planner: RRTConnect
Start: 1 start state
Goal: 3 goal states in GoalStates

This is how I have set up the problem. The first goal state is (x1, y1, yaw1), and there is an obstacle very near the (x1, y1) position such that an approach at yaw1 is blocked. The two other states are (x1, y1, yaw2) and (x1, y1, yaw3). Thus all the goal states have the same x and y coordinates, but different yaw angles. The two yaw angles, yaw2 and yaw3, are such that it is possible to compute paths from the start state to (x1, y1, yaw2) and (x1, y1, yaw3). I use a GoalStates object to add the three goal states. All the setup is done using SimpleSetup.

auto goals = std::make_shared<ob::GoalStates>(si);
goals->addState(goal_state);  // infeasible goal state
goals->addState(goal_state1); // feasible
goals->addState(goal_state2); // feasible
simple_setup.setGoal(goals);

I thought that the RRTConnect planner would sample all three goal states and would be able to plan a path to one of the two feasible goal states. However, it seems that only the first goal state was ever sampled. In the debugger I checked the goal details in the SimpleSetup object. It showed three goal states added, but samplePosition_ remained unchanged at 1, indicating that only the first goal state was sampled. The console output also shows that only one goal state was ever used in planning, which leads to the planner not being able to compute a path, as the yaw angle of the first goal state makes any approach to its x-y position impossible. In the console output below, we can see the tree growing only from the start state but not from the goal, as only the infeasible goal state is sampled. The type of solution after each iteration remains an APPROXIMATE solution.

Info: RRTConnect: Starting planning with 1 states already in datastructure
Info: RRTConnect: Created 15246 states (15245 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.010254 seconds

Info: RRTConnect: Starting planning with 15246 states already in datastructure
Info: RRTConnect: Created 26487 states (26486 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.009778 seconds

Info: RRTConnect: Starting planning with 26487 states already in datastructure
Info: RRTConnect: Created 36780 states (36779 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.007346 seconds

I am most likely missing something important in the setup, but cannot really determine what it is. I will greatly appreciate any help in this regard.

Thanks! |
From: Aditya G. <ag...@vt...> - 2022-09-24 04:04:28
|
I am relatively new to OMPL. I am experimenting with planning paths using RRTConnect in Dubins state space, and have been able to plan paths from a given starting state to the goal in the presence of obstacles. Now I am trying to use the GoalStates class to specify multiple goals, but it is not working as expected.

State space: Dubins
Planner: RRTConnect
Start: 1 start state
Goal: 3 goal states in GoalStates

This is how I have set up the problem. The first goal state is (x1, y1, yaw1), and there is an obstacle very near the (x1, y1) position such that an approach at yaw1 is blocked. The two other states are (x1, y1, yaw2) and (x1, y1, yaw3). Thus all the goal states have the same x and y coordinates, but different yaw angles. The two yaw angles, yaw2 and yaw3, are such that it is possible to compute paths from the start state to (x1, y1, yaw2) and (x1, y1, yaw3). I use a GoalStates object to add the three goal states. All the setup is done using SimpleSetup.

auto goals = std::make_shared<ob::GoalStates>(si);
goals->addState(goal_state);  // infeasible goal state
goals->addState(goal_state1); // feasible
goals->addState(goal_state2); // feasible
simple_setup.setGoal(goals);

I thought that the RRTConnect planner would sample all three goal states and would be able to plan a path to one of the two feasible goal states. However, it seems that only the first goal state was ever sampled. In the debugger I checked the goal details in the SimpleSetup object. It showed three goal states added, but samplePosition_ remained unchanged at 1, indicating that only the first goal state was sampled. The console output also shows that only one goal state was ever used in planning, which leads to the planner not being able to compute a path, as the yaw angle of the first goal state makes any approach to its x-y position impossible. In the console output below, we can see the tree growing only from the start state but not from the goal, as only the infeasible goal state is sampled. The type of solution after each iteration remains an APPROXIMATE solution.

Info: RRTConnect: Starting planning with 1 states already in datastructure
Info: RRTConnect: Created 15246 states (15245 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.010254 seconds

Info: RRTConnect: Starting planning with 15246 states already in datastructure
Info: RRTConnect: Created 26487 states (26486 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.009778 seconds

Info: RRTConnect: Starting planning with 26487 states already in datastructure
Info: RRTConnect: Created 36780 states (36779 start + 1 goal)
Info: ProblemDefinition: Adding approximate solution from planner RRTConnect
Info: Solution found in 1.007346 seconds

I am most likely missing something important in the setup, but cannot really determine what it is. I will greatly appreciate any help in this regard.

Thanks! |
From: Olzhas A. <olz...@gm...> - 2022-05-21 01:58:54
|
Hello everyone! How can I extend this enum? https://github.com/ompl/ompl/blob/main/src/ompl/base/StateSpaceTypes.h#L45 Kind regards, Olzhas Adiyatov email: olz...@gm... website: olzhas.github.io |
From: Adam F. <afi...@cs...> - 2022-01-12 10:37:36
|
Hi all, I posted about this on GitHub as well—not sure what the right place is. I am using OMPL version 1.5.2 through Python with Pybullet as the collision checker. Specifically, I am planning for a Franka arm. I pretty frequently find that when I plan for a problem—especially a difficult problem—the final state in the plan is very far off from the goal I gave the planner. I’ve tested this with the simple RRT + simple setup and have been running into the issue frequently. Is there some parameter to specify the acceptable region around the goal? I posted an example of the code I use in the GitHub issue here: https://github.com/ompl/ompl/issues/866 Hopefully there’s some easy fix on my end and I’m just doing something ill-advised. Thanks for your help, Adam -- Adam Fishman He/Him fishy.ai |
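For context on the question above: OMPL's GoalRegion does expose a distance threshold (GoalRegion::setThreshold; SimpleSetup's setStartAndGoalStates also accepts an optional threshold argument). The acceptance test itself is simple; here is a hypothetical standalone illustration in plain Python (`is_goal_satisfied` is a made-up name, not the OMPL API):

```python
import math

def is_goal_satisfied(state, goal, threshold):
    """A state satisfies a goal region when its distance to the goal
    state is within the threshold."""
    return math.dist(state, goal) <= threshold

print(is_goal_satisfied((0.9, 1.0), (1.0, 1.0), 0.2))  # True: within 0.2
print(is_goal_satisfied((0.0, 0.0), (1.0, 1.0), 0.2))  # False: too far
```

Note that a plan whose final state is far from the goal is typically an *approximate* solution (the planner ran out of time); the threshold only controls when a state counts as an exact goal match.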
From: Mark M. <mm...@ri...> - 2021-12-01 22:25:57
|
Hi Olzhas, These constants were empirically chosen to make the Dubins state space pass the space test unit tests and sanity checks. We have not spent significant time trying to minimize them as much as possible, let alone derive analytic bounds. If you know they can be made much smaller and still pass the unit tests, please submit a pull request with the desired change. Best, Mark > On Nov 30, 2021, at 8:20 AM, Olzhas Adiyatov <olz...@gm...> wrote: > > Hello everyone! > > How were these two constants determined? > > https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L48 <https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L48> > https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L49 <https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L49> > > Kind regards, > Olzhas > _______________________________________________ > ompl-users mailing list > omp...@li... <mailto:omp...@li...> > https://lists.sourceforge.net/lists/listinfo/ompl-users <https://lists.sourceforge.net/lists/listinfo/ompl-users> |
From: Olzhas A. <olz...@gm...> - 2021-11-30 16:20:37
|
Hello everyone! How were these two constants determined? https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L48 https://github.com/ompl/ompl/blob/528f02cdc5ac785ba24e1dbdf1cf621a17020b7b/src/ompl/base/spaces/src/DubinsStateSpace.cpp#L49 Kind regards, Olzhas |
From: Kejia R. <kr...@ri...> - 2021-10-06 10:26:18
|
Hello Mark,

Thank you for your response and suggestions. I am currently trying to implement my new planner class, inheriting from the "RRT" class, in Python. When I override the main "solve" method, I still want to use the "RRT::Motion" class inside it. For example, in the main "solve" method of my own planner, I want to do something like this:

class MyPlanner(oc.RRT):
    ...
    def solve(self, time_limit):
        ...
        motion = oc.RRT.Motion(self.si_)
        ...

However, in Python, I cannot find this "RRT::Motion" class and cannot get access to it. It looks like this class is defined as a protected inner class under RRT in C++, and thus I am not sure whether the Python bindings have generated correspondences even for these protected classes. So I am sending this email to ask for your help. Could you please provide some suggestions on how I can access the "RRT::Motion" class or any other protected attributes/methods in Python?

Thank you!

Best,
Kejia

On Sun, Sep 19, 2021 at 10:31 PM Mark Moll <mm...@ri...> wrote: > Hi Keija, > > The nearest neighbors data structure may not be a tree and even if it is, > its tree and the tree constructed by the RRT algorithm are not the same. > There is an implicit tree structure in RRT defined using the > RRT::Motion::parent pointers. You could need to do some additional > bookkeeping (possibly by extending the RRT::Motion class) to keep track of > a Motion instance’s depth and to loop over the leaves. > > Best, > > Mark > > > > On Sep 19, 2021, at 7:49 PM, Kejia Ren <kr...@ri...> wrote: > > Hello, > > We want to inherit from the RRT planner in OMPL to implement our own > planner. Basically, I think we just need to overwrite the main solve() > function since all other steps are the same. 
What we want to do inside the > solve() function is, we want to retrieve the depth of the tree structure (I > think it is an instance of the NearestNeighbors class inside RRT) and also > want to decide where the leaf nodes are and evaluate the state of each leaf > node. Below are a few lines of psuedocodes explaining what we want to do > with it. But we are not sure whether we have access to the tree depth and > whether the data structure used in RRT can decide where the leaf nodes are. > So we are sending this email to ask for help. Could you please give some > instructions or advice for how to do that? Thank you so much! > > solve(): > ... > if tree.depth > 10: > for n in leaf_nodes: > evaluate(n -> state) > ... > > Best, > Kejia > _______________________________________________ > ompl-users mailing list > omp...@li... > https://lists.sourceforge.net/lists/listinfo/ompl-users > > > |
From: Kejia R. <kr...@ri...> - 2021-10-05 02:43:45
|
Hello,

Below is some of my code for setting up the planner. I created an instance called "ss" of the "SimpleSetup" class, and this is how I set the control sampling for the planner:

space_info = ss.getSpaceInformation()
space_info.setPropagationStepSize(0.5)
space_info.setMinMaxControlDuration(1, 5)
planner = oc.RRT(self.space_info)
ss.setPlanner(planner)

And inside my "propagate" function, I just tried to print out the duration at the very beginning, as below, and found that the duration always takes the same value whenever the function is called:

def propagate(near, controls, duration, state):
    print(duration)
    ...

Most of the time the planner can successfully solve for a feasible path. So I think everything else looks fine, except for the control duration inconsistency issue I mentioned before. I hope someone can help figure out what could be wrong. Thanks!

Best,
Kejia

On Fri, Oct 1, 2021 at 2:09 AM Kejia Ren <kr...@ri...> wrote: > Hello, > > We are using the RRT planner with controls in python. I just found out a > problem that, even if I already set the control duration to have different > values for min and max, the "duration" input to the "propagate" function > always takes the same value. Specifically, I set the space information of > my planner with "setPropagationStepSize(0.5)" and > "setMinMaxControlDuration(1, 5)", so based on my understanding, the control > duration should be sampled with five possible values {0.5, 1.0, 1.5, 2.0, > 2.5}. However, inside my "propagate" function I found the input for > "duration" always takes the smallest value which is 0.5. And after the plan > is solved, in the solution path the durations take different values. Right > now this weird problem is causing significant inconsistency between the > planned path and its actual execution. I have no idea where the problem > could come from, and don't know how to solve it since we are not going to > modify anything inside the main "solve" function. 
Hope anybody could help > provide any clues based on my description. Thank you! > > Best, > Kejia > |
From: Mark M. <mm...@ri...> - 2021-10-05 01:29:10
|
Hi Justin, If you know the pose and shape of the obstacles at any given time, then you can create a CompoundStateSpace with a TimeStateSpace as one of the components and the current StateSpace as the other component. In other words, you’d explicitly keep track of time as part of the state of the robot. You will probably have to create a custom motion validator that will ensure that the robot can only move forward in time (and be collision-free with obstacles). You may also need additional checks to make sure the robot’s speed doesn’t exceed any limits. Best, Mark > On Oct 3, 2021, at 11:48 PM, Justin Kottinger <juk...@co...> wrote: > > Hello, > > I am attempting to do rigid body planning with controls and time-dependent obstacles. To this end, I have implemented a state validity checker class that inherits the ob::StateValidityChecker class but also tracks when the obstacles are present. I am running into an issue where time might not be tracked within the ob::State object and no ob::StateValidityChecker member functions deal with time (to the best of my knowledge). Does anyone know of some clever ways overcome this challenge? > > Thank you in advance for your time, > > Justin |
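The custom motion validator Mark describes can be sketched without OMPL at all. Below is a standalone Python illustration under assumptions not in the thread: states are plain `(x, y, t)` tuples, the obstacle is a toy square active only while t is in [1, 2), and `motion_valid` is a made-up name. It checks the three things Mark lists: time moves forward, speed stays bounded, and interpolated states avoid obstacles that are active at their own time.

```python
import math

def obstacle_active(x, y, t):
    """Toy time-dependent obstacle: a small square present only for t in [1, 2)."""
    return 1.0 <= t < 2.0 and 0.4 <= x <= 0.6 and 0.4 <= y <= 0.6

def motion_valid(s1, s2, v_max, n_checks=10):
    """Validate a straight-line motion between (x, y, t) states."""
    (x1, y1, t1), (x2, y2, t2) = s1, s2
    if t2 <= t1:                                          # must move forward in time
        return False
    if math.hypot(x2 - x1, y2 - y1) / (t2 - t1) > v_max:  # speed limit
        return False
    for i in range(n_checks + 1):                         # interpolated collision checks
        a = i / n_checks
        x = x1 + a * (x2 - x1)
        y = y1 + a * (y2 - y1)
        t = t1 + a * (t2 - t1)
        if obstacle_active(x, y, t):
            return False
    return True

print(motion_valid((0, 0, 0), (2, 2, 2), v_max=2.0))  # True: crosses the square before t=1
print(motion_valid((0, 0, 0), (1, 1, 3), v_max=2.0))  # False: inside the square while active
print(motion_valid((0, 0, 2), (2, 2, 1), v_max=2.0))  # False: time runs backward
```

In OMPL terms, this is roughly what a MotionValidator subclass over a CompoundStateSpace of (position, TimeStateSpace) would check.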
From: Justin K. <juk...@co...> - 2021-10-04 07:20:36
|
Hello,

I am attempting to do rigid body planning with controls and time-dependent obstacles. To this end, I have implemented a state validity checker class that inherits from the ob::StateValidityChecker class but also tracks when the obstacles are present. I am running into an issue: time does not appear to be tracked within the ob::State object, and no ob::StateValidityChecker member functions deal with time (to the best of my knowledge). Does anyone know of some clever ways to overcome this challenge?

Thank you in advance for your time,

Justin |
From: Kejia R. <kr...@ri...> - 2021-10-01 07:09:49
|
Hello, We are using the RRT planner with controls in python. I just found out a problem that, even if I already set the control duration to have different values for min and max, the "duration" input to the "propagate" function always takes the same value. Specifically, I set the space information of my planner with "setPropagationStepSize(0.5)" and "setMinMaxControlDuration(1, 5)", so based on my understanding, the control duration should be sampled with five possible values {0.5, 1.0, 1.5, 2.0, 2.5}. However, inside my "propagate" function I found the input for "duration" always takes the smallest value which is 0.5. And after the plan is solved, in the solution path the durations take different values. Right now this weird problem is causing significant inconsistency between the planned path and its actual execution. I have no idea where the problem could come from, and don't know how to solve it since we are not going to modify anything inside the main "solve" function. Hope anybody could help provide any clues based on my description. Thank you! Best, Kejia |
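One possible reading of the behavior above (an assumption worth verifying against the OMPL documentation, not a confirmed answer): setMinMaxControlDuration counts whole propagation steps, so the *total* duration of a sampled control is steps × stepSize, while a propagate callback that is invoked once per step would always see exactly one step size per call. The arithmetic, with the values from the email:

```python
# Plain arithmetic with the values from the email (no OMPL imports).
step_size = 0.5              # setPropagationStepSize(0.5)
min_steps, max_steps = 1, 5  # setMinMaxControlDuration(1, 5): counts steps

# Possible total durations for a sampled control:
durations = [k * step_size for k in range(min_steps, max_steps + 1)]
print(durations)  # [0.5, 1.0, 1.5, 2.0, 2.5]

# If propagation happens one step at a time, every individual callback
# invocation covers exactly one step:
per_call_duration = step_size
print(per_call_duration)  # 0.5
```

This would be consistent with both observations in the email: the callback always printing 0.5, and the solution path reporting varying (total) durations.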
From: Mark M. <mm...@ri...> - 2021-09-20 03:31:46
|
Hi Keija, The nearest neighbors data structure may not be a tree and even if it is, its tree and the tree constructed by the RRT algorithm are not the same. There is an implicit tree structure in RRT defined using the RRT::Motion::parent pointers. You could need to do some additional bookkeeping (possibly by extending the RRT::Motion class) to keep track of a Motion instance’s depth and to loop over the leaves. Best, Mark > On Sep 19, 2021, at 7:49 PM, Kejia Ren <kr...@ri...> wrote: > > Hello, > > We want to inherit from the RRT planner in OMPL to implement our own planner. Basically, I think we just need to overwrite the main solve() function since all other steps are the same. What we want to do inside the solve() function is, we want to retrieve the depth of the tree structure (I think it is an instance of the NearestNeighbors class inside RRT) and also want to decide where the leaf nodes are and evaluate the state of each leaf node. Below are a few lines of psuedocodes explaining what we want to do with it. But we are not sure whether we have access to the tree depth and whether the data structure used in RRT can decide where the leaf nodes are. So we are sending this email to ask for help. Could you please give some instructions or advice for how to do that? Thank you so much! > > solve(): > ... > if tree.depth > 10: > for n in leaf_nodes: > evaluate(n -> state) > ... > > Best, > Kejia > _______________________________________________ > ompl-users mailing list > omp...@li... > https://lists.sourceforge.net/lists/listinfo/ompl-users |
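The bookkeeping Mark suggests can be sketched in a few lines. This is plain Python with a hypothetical `Motion` stand-in (mirroring only the RRT::Motion::parent pointer he mentions, not the OMPL class): depth comes from walking parent pointers, and a leaf is any motion that no other motion names as its parent.

```python
class Motion:
    """Hypothetical stand-in for RRT::Motion: a state plus a parent pointer."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent  # mirrors RRT::Motion::parent

def depth(motion):
    """Walk parent pointers back to the root to get a node's depth."""
    d = 0
    while motion.parent is not None:
        motion = motion.parent
        d += 1
    return d

def leaves(motions):
    """A motion is a leaf if no other motion points to it as a parent."""
    parent_ids = {id(m.parent) for m in motions if m.parent is not None}
    return [m for m in motions if id(m) not in parent_ids]

# Tiny tree: root -> a -> b, and root -> c.
root = Motion("root")
a = Motion("a", root)
b = Motion("b", a)
c = Motion("c", root)
motions = [root, a, b, c]
print(depth(b))                                   # 2
print(sorted(m.state for m in leaves(motions)))   # ['b', 'c']
```

In a real subclass the `motions` list would come from the planner's own bookkeeping (e.g. appending each new Motion as it is created), since the NearestNeighbors structure does not expose the tree.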
From: Kejia R. <kr...@ri...> - 2021-09-20 03:18:14
|
Hello, We want to inherit from the RRT planner in OMPL to implement our own planner. Basically, I think we just need to override the main solve() function, since all the other steps are the same. Inside solve(), we want to retrieve the depth of the tree structure (I think it is an instance of the NearestNeighbors class inside RRT), determine where the leaf nodes are, and evaluate the state of each leaf node. Below is a few lines of pseudocode showing what we want to do. We are not sure whether we have access to the tree depth, or whether the data structure used in RRT can tell us where the leaf nodes are, so we are writing to ask for help. Could you please give some instructions or advice on how to do that? Thank you so much!

    solve():
        ...
        if tree.depth > 10:
            for n in leaf_nodes:
                evaluate(n->state)
        ...

Best, Kejia |
From: Emre C. G. (S. <emr...@sa...> - 2021-09-03 23:05:22
|
Hello, How about goal sampling on a separate thread using Python's threading module, where sampled goal states are added to an ob.GoalStates instance? I think it is working. Can you check the following demo, which I modified from RigidBodyPlanning?

My code:

    import threading
    import time

    import numpy as np
    from ompl import base as ob
    from ompl import geometric as og


    def isStateValid(state):
        if 0.2 <= state[0] <= 0.8 and 0.2 <= state[1] <= 0.8:
            return False
        else:
            return True


    def sampleFunc():
        t = 1000 * time.time()
        np.random.seed(int(t) % 2 ** 32)
        random_state = np.random.uniform(low=np.array([0.75, 0.75]),
                                         high=np.array([0.95, 0.95]), size=(2,))
        if isStateValid(random_state):
            return random_state
        else:
            return None


    def add_goal(stop, space):
        while True:
            sgoal = ob.State(space)
            random_state = sampleFunc()
            if random_state is not None:
                for i in range(2):
                    sgoal[i] = random_state[i]
                goal.addState(sgoal)
            if stop():
                break


    if __name__ == '__main__':
        # create state space and add bounds
        space = ob.RealVectorStateSpace(2)
        bounds = ob.RealVectorBounds(2)
        bounds.setLow(0.)
        bounds.setHigh(1.)
        space.setBounds(bounds)

        # set state validity checker
        ss = og.SimpleSetup(space)
        ss.setStateValidityChecker(ob.StateValidityCheckerFn(isStateValid))

        # define start state
        sstart = ob.State(space)
        for i in range(2):
            sstart[i] = 0.

        # define goal as a global GoalStates instance
        global goal
        goal = ob.GoalStates(ss.getSpaceInformation())

        # initially there are no goal states
        print("Initial Number of Goal States: ", goal.getStateCount())

        # define goal-sampling thread and start it
        stop_thread = False
        goal_thread = threading.Thread(target=add_goal,
                                       args=(lambda: stop_thread, space))
        goal_thread.start()

        # define and set an optimizing planner to see if it finds different goals
        planner = og.RRTstar(ss.getSpaceInformation())
        planner.setProblemDefinition(ss.getProblemDefinition())
        ss.setPlanner(planner)

        # set start state and the goal
        ss.setStartState(sstart)
        ss.setGoal(goal)

        # check how many goal states were found before planning
        print("Number of Goal States: ", goal.getStateCount())

        # start planning
        solved = ss.solve(10.0)
        if solved:
            # stop the sampling thread
            stop_thread = True
            goal_thread.join()
            # total number of goal states
            print("Final Number of Goal States: ", goal.getStateCount())
            # print the simplified path
            print(ss.getSolutionPath())

The output:

    Initial Number of Goal States:  0
    Number of Goal States:  68
    Debug: RRTstar: Planner range detected to be 0.282843
    Info:  RRTstar: No optimization objective specified. Defaulting to optimizing path length for the allowed planning time.
    Info:  RRTstar: Started planning with 1 states. Seeking a solution better than 0.00000.
    Info:  RRTstar: Initial k-nearest value of 25
    Info:  RRTstar: Found an initial solution with a cost of 1.61 in 66 iterations (24 vertices in the graph)
    Info:  RRTstar: Created 708 new states. Checked 130827 rewire options. 52 goal states in tree. Final solution cost 1.460
    Info:  Solution found in 10.038575 seconds
    Final Number of Goal States:  93746
    Geometric path with 8 states
    RealVectorState [0 0]
    RealVectorState [0.188465 0.0446338]
    RealVectorState [0.463898 0.105294]
    RealVectorState [0.578831 0.131403]
    RealVectorState [0.808084 0.189118]
    RealVectorState [0.866363 0.437726]
    RealVectorState [0.896772 0.648098]
    RealVectorState [0.921206 0.808713]

-- 
EMRE CEMAL GÖNEN
GRADUATE STUDENT
Sabancı Üniversitesi, Üniversite Caddesi No:27, 34956 Orta Mahalle, Tuzla - İstanbul
T 0 216 483 90 00 - 2331
www.sabanciuniv.edu <https://www.sabanciuniv.edu/> |
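One robustness tweak worth considering for the demo above (an editorial suggestion, not something from the thread): using a threading.Event plus a short sleep makes the sampling loop easier to stop cleanly and keeps the goal list from growing into the tens of thousands of states, as the final count above shows. A self-contained sketch without OMPL:

```python
import threading
import time

def add_goals(stop_event, accept, goals):
    # throttled sampling loop: exits promptly when the event is set,
    # and sleeps briefly so the goal list does not grow unboundedly
    while not stop_event.is_set():
        candidate = (0.9, 0.9)          # stand-in for a sampled state
        if accept(candidate):
            goals.append(candidate)
        time.sleep(0.01)

stop = threading.Event()
goals = []
t = threading.Thread(target=add_goals, args=(stop, lambda s: True, goals))
t.start()
time.sleep(0.1)                         # the planner would run here
stop.set()
t.join()
print(len(goals) > 0, t.is_alive())     # True False
```

A caveat that still applies either way: addState() is called from one thread while the planner reads the goal from another, so whether this is safe depends on GoalStates' internal locking.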
From: Zachary K. <za...@ri...> - 2021-09-01 14:31:23
|
Notice that the paths after simplification have only 2 states: these two states are the start and the goal, which means your path in each of the three cases is simply the "straight line" from start to goal. My guess at why the search tree and the unsimplified path have a different number of states in each case is that goal biasing occurred at different iterations, and the problem was solved as soon as a goal sample was used. Best, Zak |
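Zak's point can be illustrated numerically (toy coordinates, nothing from the actual runs): once simplification reduces a path to [start, goal], its length is just the start-goal Euclidean distance, so every run reports the same value no matter how the search tree grew.

```python
import math

def path_length(states):
    # sum of straight-line segment lengths along the path
    return sum(math.dist(a, b) for a, b in zip(states, states[1:]))

start, goal = (0.0, 0.0), (1.5, 1.2)
run1 = [start, (0.4, 0.1), (0.9, 0.8), goal]   # 4-state raw path, run 1
run2 = [start, (0.2, 0.5), goal]               # 3-state raw path, run 2
simplified = [start, goal]                     # both runs simplify to this

print(round(path_length(simplified), 3))       # 1.921, identical every run
```

To see genuinely different path lengths one would need an environment where the straight line is invalid, so that simplification cannot collapse the path to two states.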
From: Emre C. G. (S. <emr...@sa...> - 2021-09-01 14:19:13
|
I have a question about the path length obtained after RRT planning with constrained planning, where my initial and goal states are RealVectorStates. Although I solve the same planning problem many times, I always get the same path length, with the same simplified path lengths. Is this normal? Or what should I do to obtain different path lengths?

    Info:  Using Projection-Based State Space!
    Debug: RRT: Planner range detected to be 3.305002
    Info:  RRT: Starting planning with 1 states already in datastructure
    Info:  RRT: Created 18 states
    Info:  Solution found in 5.308511 seconds
    Info:  Simplifying solution...
    Info:  SimpleSetup: Path simplification took 0.066612 seconds and changed from 4 to 2 states
    Info:  Simplified Path Length: 1.934 -> 1.934
    Info:  Interpolating path...
    Info:  Interpolating simplified path...

    Info:  Using Projection-Based State Space!
    Debug: RRT: Planner range detected to be 3.305002
    Info:  RRT: Starting planning with 1 states already in datastructure
    Info:  RRT: Created 44 states
    Info:  Solution found in 6.888629 seconds
    Info:  Simplifying solution...
    Info:  SimpleSetup: Path simplification took 0.064105 seconds and changed from 6 to 2 states
    Info:  Simplified Path Length: 1.934 -> 1.934
    Info:  Interpolating path...
    Info:  Interpolating simplified path...

    Info:  Using Projection-Based State Space!
    Debug: RRT: Planner range detected to be 3.305002
    Info:  RRT: Starting planning with 1 states already in datastructure
    Info:  RRT: Created 36 states
    Info:  Solution found in 8.470227 seconds
    Info:  Simplifying solution...
    Info:  SimpleSetup: Path simplification took 0.000002 seconds and changed from 2 to 2 states
    Info:  Simplified Path Length: 1.934 -> 1.934
    Info:  Interpolating path...
    Info:  Interpolating simplified path...

Kindest Regards, E. Cemal Gonen
-- 
EMRE CEMAL GÖNEN
GRADUATE STUDENT
Sabancı Üniversitesi, Üniversite Caddesi No:27, 34956 Orta Mahalle, Tuzla - İstanbul
T 0 216 483 90 00 - 2331
www.sabanciuniv.edu <https://www.sabanciuniv.edu/> |
From: Mark M. <mm...@ri...> - 2021-08-30 18:29:02
|
Hi Emre, This may be difficult to do. GoalLazySamples spawns a separate thread to generate valid goal samples. If this thread also calls Python code (e.g., if your StateValidityChecker is written in Python), then you run into trouble with Python's Global Interpreter Lock (GIL). This is a limitation of the current bindings. Best, Mark |
From: Emre C. G. (S. <emr...@sa...> - 2021-08-29 19:02:34
|
Hello, Can you provide an example of how to define GoalLazySamples using the Python bindings? I define it in the following way, but I get this error:

    terminate called after throwing an instance of 'boost::python::error_already_set'
    Aborted (core dumped)

My GoalLazySamples definition:

    class GoalLRegion(ob.GoalLazySamples):
        def __init__(self, si, samplerFunc, goalRegionParam):
            super(GoalLRegion, self).__init__(si, samplerFunc)
            self.mySampler = ob.ValidStateSampler(si)
            self.goalRegionParam = goalRegionParam

        def distanceGoal(self, state):
            d = np.sqrt((state[0] - self.goalRegionParam[0]) ** 2 +
                        (state[1] - self.goalRegionParam[1]) ** 2) - \
                self.goalRegionParam[2]
            return d

        def sampleGoal(self, state):
            self.mySampler.sample(state)

        def maxSampleCount(self):
            return 100

        def mySamplingFunction(self, gls, state):
            return True


    goal = GoalLRegion(ss.getSpaceInformation(),
                       ob.GoalSamplingFn(GoalLRegion.mySamplingFunction),
                       goalRegionParam)

-- 
EMRE CEMAL GÖNEN
GRADUATE STUDENT
Sabancı Üniversitesi, Üniversite Caddesi No:27, 34956 Orta Mahalle, Tuzla - İstanbul
T 0 216 483 90 00 - 2331
www.sabanciuniv.edu <https://www.sabanciuniv.edu/> |
From: Zachary K. <za...@ri...> - 2021-08-11 14:45:09
|
Emre, Your setup on paper sounds like it should work. Some ideas:

1. Make sure that the sampler allocated through allocDefaultStateSampler is for the correct space if you are using the constrained state space. That is, make sure the samplers are sampling for the appropriate state types. A segfault could be occurring due to using a constrained state (which wraps the underlying state) in the underlying space, and vice versa.

2. I recommend that you use GoalLazySamples <https://ompl.kavrakilab.org/classompl_1_1base_1_1GoalLazySamples.html> (and implement a GoalSamplingFn) rather than GoalSampleableRegion, as it could take a while to find a constraint-satisfying configuration with projection-based sampling.

3. I would also recommend not using the ProjectedStateSampler and instead just using the state space sampler and the constraint directly, as you should check whether the constraint is satisfied before returning the found configuration as a valid goal. All the ProjectedStateSampler does is a) call the underlying space sampler, b) project using the constraint, and c) check if bounds are satisfied.

Best, Zak

On 8/10/21 3:48 PM, Emre Cemal Gönen (Student) wrote:
> Hello,
>
> I am trying to define a GoalSampleableRegion for a constrained planning problem. Since the goal configuration includes additional constraints beyond the planning constraints, I define two constraint objects, one for planning and one for the goal. My idea is to create a motion plan using the planning constraints, with the goal region defined by samples generated from the goal constraints.
>
> For the definition of the GoalSampleableRegion:
>
> * the distanceGoal method returns goal_constraint.distance(state)
> * the sampleGoal method tries to use the available samplers, which is what my question is about
>
> For sampling, rather than creating my own sampling method, I prefer to use the samplers already available for constrained planning problems. I tried to use a ProjectedStateSampler that takes the constrained state space of the goal constraint and allocDefaultStateSampler from the state space as the StateSamplerPtr. When I call the sampleUniform() method of the ProjectedStateSampler I defined, I get a "Segmentation fault (core dumped)" error. Do you think I am doing something wrong? Or is there a more convenient way to solve my problem?
>
> Thank you,
> Kindest Regards,
> E. Cemal Gonen |
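Zak's third suggestion (sample the ambient space, project with the constraint, then verify satisfaction) can be sketched in a few lines. The classes below are pure-Python stand-ins for illustration only — a uniform sampler and a circle constraint, not the OMPL API; the point is the sample-project-verify loop that a goal-sampling function would run:

```python
import random

class UniformSampler:
    """Stand-in for an ambient-space StateSampler."""
    def sampleUniform(self, state):
        state[0] = random.uniform(0.0, 1.0)
        state[1] = random.uniform(0.0, 1.0)

class CircleConstraint:
    """Stand-in for a constraint: points on a circle of radius r."""
    def __init__(self, cx, cy, r):
        self.cx, self.cy, self.r = cx, cy, r

    def project(self, state):
        # pull the sample radially onto the circle
        dx, dy = state[0] - self.cx, state[1] - self.cy
        n = (dx * dx + dy * dy) ** 0.5
        if n == 0.0:
            return False
        state[0] = self.cx + self.r * dx / n
        state[1] = self.cy + self.r * dy / n
        return True

    def isSatisfied(self, state, tol=1e-9):
        dx, dy = state[0] - self.cx, state[1] - self.cy
        return abs((dx * dx + dy * dy) ** 0.5 - self.r) < tol

def sample_goal(sampler, constraint, state, max_tries=100):
    # sample the ambient space, project onto the constraint, then verify
    # satisfaction before accepting the configuration as a goal
    for _ in range(max_tries):
        sampler.sampleUniform(state)
        if constraint.project(state) and constraint.isSatisfied(state):
            return True
    return False

state = [0.0, 0.0]
ok = sample_goal(UniformSampler(), CircleConstraint(0.5, 0.5, 0.25), state)
print(ok)  # True (with overwhelming probability)
```

In the real setting, validity and bounds checks would also go inside the loop before returning True, which is exactly the check Zak notes the ProjectedStateSampler does not perform for constraint satisfaction.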