From: Zachary K. <za...@ri...> - 2021-09-01 14:31:23
|
Notice that the paths after simplification have only two states: the start and the goal. This means your path in each of the three cases is simply the "straight line" from the start to the goal. My guess as to why the search tree and the unsimplified path contain a different number of states in each case is that goal biasing occurred at different iterations, but the problem was solved as soon as a goal sample was used. Best, Zak On 9/1/21 8:14 AM, Emre Cemal Gönen (Student) wrote: > I have a question about the path length obtained after RRT planning using > Constrained Planning, where my initial and goal states are > RealVectorStates. Although I solve the same planning > problem many times, I get the same path length with the same > simplified path lengths. Is this normal? Or, what should I do to obtain > different path lengths? > > > Info: Using Projection-Based State Space! > Debug: RRT: Planner range detected to be 3.305002 > Info: RRT: Starting planning with 1 states already in datastructure > Info: RRT: Created 18 states > Info: Solution found in 5.308511 seconds > Info: Simplifying solution... > Info: SimpleSetup: Path simplification took 0.066612 seconds and > changed from 4 to 2 states > Info: Simplified Path Length: *1.934 -> 1.934* > Info: Interpolating path... > Info: Interpolating simplified path... > > Info: Using Projection-Based State Space! > Debug: RRT: Planner range detected to be 3.305002 > Info: RRT: Starting planning with 1 states already in datastructure > Info: RRT: Created 44 states > Info: Solution found in 6.888629 seconds > Info: Simplifying solution... > Info: SimpleSetup: Path simplification took 0.064105 seconds and > changed from 6 to 2 states > Info: Simplified Path Length: *1.934 -> 1.934* > Info: Interpolating path... > Info: Interpolating simplified path... > > > Info: Using Projection-Based State Space! 
> Debug: RRT: Planner range detected to be 3.305002 > Info: RRT: Starting planning with 1 states already in datastructure > Info: RRT: Created 36 states > Info: Solution found in 8.470227 seconds > Info: Simplifying solution... > Info: SimpleSetup: Path simplification took 0.000002 seconds and > changed from 2 to 2 states > Info: Simplified Path Length: *1.934 -> 1.934* > Info: Interpolating path... > Info: Interpolating simplified path... > > > Kindest Regards, > E. Cemal Gonen > -- > *EMRE CEMAL GÖNEN* > YÜKSEK LİSANS ÖĞRENCİSİ > GRADUATE STUDENT > > ------------------------------------------------------------------------ > Sabancı Üniversitesi > Üniversite Caddesi No:27 > 34956 Orta Mahalle > Tuzla - İstanbul > > *T* 0 216 483 90 00 - 2331 > *www.sabanciuniv.edu* <https://www.sabanciuniv.edu/> > > <http://www.sabanciuniv.edu> > > > > > _______________________________________________ > ompl-users mailing list > omp...@li... > https://lists.sourceforge.net/lists/listinfo/ompl-users |
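Zak's explanation can be checked without OMPL: once simplification reduces a path to its two endpoint states, the reported length is just the straight-line distance between start and goal, so repeated runs agree no matter how many states the RRT tree contained. A library-free sketch (the start, goal, and intermediate coordinates below are invented for illustration):

```python
import math

def path_length(states):
    """Sum of Euclidean segment lengths along a path."""
    return sum(math.dist(a, b) for a, b in zip(states, states[1:]))

start, goal = (0.0, 0.0), (1.2, 1.5)  # hypothetical RealVector states

# Two "runs" with different intermediate states (different tree growth)...
run_a = [start, (0.3, 0.9), (0.8, 1.1), goal]
run_b = [start, (0.1, 1.2), goal]
# ...but both simplify to the same 2-state path:
simplified = [start, goal]

# Unsimplified lengths differ across runs; the simplified length is always
# the start-goal distance, hence identical across runs.
print(round(path_length(run_a), 3) != round(path_length(run_b), 3))        # True
print(path_length(simplified) == math.dist(start, goal))                   # True
```

To see genuinely different simplified lengths, the environment would have to prevent the straight-line shortcut, e.g. through obstacles or constraints that keep intermediate states in the simplified path.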
From: Emre C. G. (S. <emr...@sa...> - 2021-09-01 14:19:13
|
I have a question about the path length obtained after RRT planning using Constrained Planning, where my initial and goal states are RealVectorStates. Although I solve the same planning problem many times, I get the same path length with the same simplified path lengths. Is this normal? Or, what should I do to obtain different path lengths? Info: Using Projection-Based State Space! Debug: RRT: Planner range detected to be 3.305002 Info: RRT: Starting planning with 1 states already in datastructure Info: RRT: Created 18 states Info: Solution found in 5.308511 seconds Info: Simplifying solution... Info: SimpleSetup: Path simplification took 0.066612 seconds and changed from 4 to 2 states Info: Simplified Path Length: *1.934 -> 1.934* Info: Interpolating path... Info: Interpolating simplified path... Info: Using Projection-Based State Space! Debug: RRT: Planner range detected to be 3.305002 Info: RRT: Starting planning with 1 states already in datastructure Info: RRT: Created 44 states Info: Solution found in 6.888629 seconds Info: Simplifying solution... Info: SimpleSetup: Path simplification took 0.064105 seconds and changed from 6 to 2 states Info: Simplified Path Length: *1.934 -> 1.934* Info: Interpolating path... Info: Interpolating simplified path... Info: Using Projection-Based State Space! Debug: RRT: Planner range detected to be 3.305002 Info: RRT: Starting planning with 1 states already in datastructure Info: RRT: Created 36 states Info: Solution found in 8.470227 seconds Info: Simplifying solution... Info: SimpleSetup: Path simplification took 0.000002 seconds and changed from 2 to 2 states Info: Simplified Path Length: *1.934 -> 1.934* Info: Interpolating path... Info: Interpolating simplified path... Kindest Regards, E. 
Cemal Gonen -- *EMRE CEMAL GÖNEN* YÜKSEK LİSANS ÖĞRENCİSİ GRADUATE STUDENT ------------------------------ Sabancı Üniversitesi Üniversite Caddesi No:27 34956 Orta Mahalle Tuzla - İstanbul *T* 0 216 483 90 00 - 2331 *www.sabanciuniv.edu* <https://www.sabanciuniv.edu/> <http://www.sabanciuniv.edu> |
From: Mark M. <mm...@ri...> - 2021-08-30 18:29:02
|
Hi Emre, This may be difficult to do. GoalLazySamples spawns a separate thread to generate valid goal samples. If this thread also calls Python code (e.g., if your StateValidityChecker is written in Python), then you run into trouble with Python’s Global Interpreter Lock (GIL). This is a limitation of the current bindings. Best, Mark > On Aug 29, 2021, at 11:56 AM, Emre Cemal Gönen (Student) <emr...@sa...> wrote: > > Hello, > > Can you provide an example of how to define GoalLazySamples using Python Bindings? > > I define it in the following way, but I get an error of > > terminate called after throwing an instance of 'boost::python::error_already_set' > Aborted (core dumped) > > > My GoalLazySamples definition: > class GoalLRegion(ob.GoalLazySamples): > def __init__(self, si, samplerFunc, goalRegionParam): > super(GoalLRegion, self).__init__(si, samplerFunc) > self.mySampler = ob.ValidStateSampler(si) > self.goalRegionParam = goalRegionParam > > def distanceGoal(self, state): > d = np.sqrt((state[0] - self.goalRegionParam[0]) ** 2 + (state[0] - self.goalRegionParam[1]) ** 2) - \ > self.goalRegionParam[2] > return d > > def sampleGoal(self, state): > self.mySampler.sample(state) > > def maxSampleCount(self): > return 100 > > def mySamplingFunction(self, gls, state): > return True > > > goal = GoalLRegion(ss.getSpaceInformation(), ob.GoalSamplingFn(GoalLRegion.mySamplingFunction), goalRegionParam) > > -- > EMRE CEMAL GÖNEN > YÜKSEK LİSANS ÖĞRENCİSİ > GRADUATE STUDENT > Sabancı Üniversitesi > Üniversite Caddesi No:27 > 34956 Orta Mahalle > Tuzla - İstanbul > T 0 216 483 90 00 - 2331 > www.sabanciuniv.edu <https://www.sabanciuniv.edu/> <http://www.sabanciuniv.edu/> > > > > _______________________________________________ > ompl-users mailing list > omp...@li... > https://lists.sourceforge.net/lists/listinfo/ompl-users |
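Setting the GIL limitation aside, two details in the quoted snippet look suspect: mySamplingFunction is passed as an unbound method (so the arguments arrive shifted when the sampling thread invokes it), and distanceGoal uses state[0] in both squared terms, where the second should presumably be state[1]. The intended signed distance to a circular goal region can be written library-free with the index fixed (the region parameters here are hypothetical):

```python
import math

# goalRegionParam = (center_x, center_y, radius) of a circular goal region
goal_region = (1.0, 2.0, 0.5)

def distance_to_goal(state, region):
    """Signed distance from state to the boundary of a circular goal region.
    Zero or negative means the state lies inside the region."""
    cx, cy, r = region
    # note: state[1] in the second term -- the original snippet used state[0] twice
    return math.hypot(state[0] - cx, state[1] - cy) - r

print(distance_to_goal((1.0, 2.0), goal_region))  # center of the region: -0.5
print(distance_to_goal((1.0, 3.0), goal_region))  # 1.0 above the center: 0.5
```

With the original double use of state[0], the "region" degenerates into something that ignores the second coordinate entirely, so goal detection can misbehave even before the binding-level error appears.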
From: Emre C. G. (S. <emr...@sa...> - 2021-08-29 19:02:34
|
Hello,

Can you provide an example of how to define GoalLazySamples using Python Bindings? I define it in the following way, but I get an error of

terminate called after throwing an instance of 'boost::python::error_already_set'
Aborted (core dumped)

My GoalLazySamples definition:

    class GoalLRegion(ob.GoalLazySamples):
        def __init__(self, si, samplerFunc, goalRegionParam):
            super(GoalLRegion, self).__init__(si, samplerFunc)
            self.mySampler = ob.ValidStateSampler(si)
            self.goalRegionParam = goalRegionParam

        def distanceGoal(self, state):
            d = np.sqrt((state[0] - self.goalRegionParam[0]) ** 2 +
                        (state[0] - self.goalRegionParam[1]) ** 2) - \
                self.goalRegionParam[2]
            return d

        def sampleGoal(self, state):
            self.mySampler.sample(state)

        def maxSampleCount(self):
            return 100

        def mySamplingFunction(self, gls, state):
            return True

    goal = GoalLRegion(ss.getSpaceInformation(), ob.GoalSamplingFn(GoalLRegion.mySamplingFunction), goalRegionParam)

-- 
*EMRE CEMAL GÖNEN*
YÜKSEK LİSANS ÖĞRENCİSİ
GRADUATE STUDENT
------------------------------
Sabancı Üniversitesi
Üniversite Caddesi No:27
34956 Orta Mahalle
Tuzla - İstanbul
*T* 0 216 483 90 00 - 2331
*www.sabanciuniv.edu* <https://www.sabanciuniv.edu/> <http://www.sabanciuniv.edu> |
From: Zachary K. <za...@ri...> - 2021-08-11 14:45:09
|
Emre, Your setup on paper sounds like it should work. Some ideas: 1. Make sure that the sampler allocated through allocDefaultStateSampler is for the correct space if you are using the constrained state space. That is, make sure that the samplers are sampling for the appropriate state types. A segfault could be occurring due to using a constrained state (which wraps the underlying state) in the underlying space and vice versa. 2. I recommend that you use GoalLazySamples <https://ompl.kavrakilab.org/classompl_1_1base_1_1GoalLazySamples.html> (and implement a GoalSamplingFn) rather than GoalSampleableRegion, as it could take a while to find a constraint-satisfying configuration with projection-based sampling. 3. I would also recommend not using the ProjectedStateSampler and instead just using the state space sampler and constraint directly, as you should check if the constraint is satisfied before returning the found configuration as a valid goal. All the ProjectedStateSampler does is a) call the underlying space sampler, b) project using the constraint, and c) check if bounds are satisfied. Best, Zak On 8/10/21 3:48 PM, Emre Cemal Gönen (Student) wrote: > Hello, > > I try to define a GoalSampleableRegion for a constrained planning > problem. Since the goal configuration includes additional constraints > for the planning constraints, I define two constraint objects for both > planning and goal. My idea is to create a motion plan using planning > constraints, and the goal region is defined based on the samples > generated from the goal constraints. > > For the definition of the GoalSampleableRegion, > > * distanceGoal method is chosen as a parameter of > goal_constraint.distance(state) > * sampleGoal method tries to use available samplers, which my > question is > > For sampling, rather than creating my own sampling method, I prefer to > use available samplers that are used for the planning of constrained > planning problems. 
I tried to use ProjectedStateSampler that takes > Constrained State Space of the goal constraint and > allocDefaultStateSampler from the state space as StateSamplerPtr. When > I call sampleUniform() method from the ProjectedStateSampler I > defined, I get the Segmentation fault (core dumped) error. Do you > think I do something wrong? Or, are there any ways to do it more > conveniently to solve my problem? > > Thank you, > Kindest Regards, > E. Cemal Gonen > > > -- > *EMRE CEMAL GÖNEN* > YÜKSEK LİSANS ÖĞRENCİSİ > GRADUATE STUDENT > > ------------------------------------------------------------------------ > Sabancı Üniversitesi > Üniversite Caddesi No:27 > 34956 Orta Mahalle > Tuzla - İstanbul > > *T* 0 216 483 90 00 - 2331 > *www.sabanciuniv.edu* <https://www.sabanciuniv.edu/> > > <http://www.sabanciuniv.edu> > > > image.gif > > > _______________________________________________ > ompl-users mailing list > omp...@li... > https://lists.sourceforge.net/lists/listinfo/ompl-users |
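Zak's sample-then-project-then-check recipe can be sketched library-free. Here the constraint is a unit circle (f(q) = ||q|| - 1 = 0) with a closed-form projection; a sample only counts as a valid goal once the projected point satisfies the constraint tolerance and the bounds. The constraint, tolerance, and bounds below are invented stand-ins for an OMPL Constraint and state-space bounds:

```python
import math
import random

TOL = 1e-6
BOUNDS = (-2.0, 2.0)  # per-coordinate bounds, stand-in for state space bounds

def constraint_fn(q):
    """f(q) = 0 on the constraint manifold: here, the unit circle."""
    return math.hypot(*q) - 1.0

def project(q):
    """Project onto the unit circle (closed form for this toy constraint)."""
    n = math.hypot(*q)
    return (q[0] / n, q[1] / n) if n > 0 else (1.0, 0.0)

def sample_goal(rng, max_tries=100):
    """a) sample the underlying space, b) project, c) check constraint + bounds."""
    for _ in range(max_tries):
        q = (rng.uniform(*BOUNDS), rng.uniform(*BOUNDS))
        q = project(q)
        if abs(constraint_fn(q)) <= TOL and all(BOUNDS[0] <= x <= BOUNDS[1] for x in q):
            return q
    return None

g = sample_goal(random.Random(0))
print(abs(constraint_fn(g)) <= TOL)  # True: the returned goal satisfies the constraint
```

A real OMPL Constraint would supply its own (iterative) projection; the point here is the control flow: project first, then explicitly verify satisfaction and bounds before returning the state as a goal.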
From: Emre C. G. (S. <emr...@sa...> - 2021-08-10 20:55:10
|
Hello, I am trying to define a GoalSampleableRegion for a constrained planning problem. Since the goal configuration includes additional constraints beyond the planning constraints, I define two constraint objects, one for planning and one for the goal. My idea is to create a motion plan using the planning constraints, while the goal region is defined based on samples generated from the goal constraints. For the definition of the GoalSampleableRegion, - the distanceGoal method is chosen as a parameter of goal_constraint.distance(state) - the sampleGoal method tries to use available samplers, which is what my question is about For sampling, rather than creating my own sampling method, I prefer to use the samplers that are already available for constrained planning problems. I tried to use a ProjectedStateSampler that takes the Constrained State Space of the goal constraint and allocDefaultStateSampler from the state space as the StateSamplerPtr. When I call the sampleUniform() method of the ProjectedStateSampler I defined, I get a Segmentation fault (core dumped) error. Do you think I am doing something wrong? Or, is there a more convenient way to solve my problem? Thank you, Kindest Regards, E. Cemal Gonen -- *EMRE CEMAL GÖNEN* YÜKSEK LİSANS ÖĞRENCİSİ GRADUATE STUDENT ------------------------------ Sabancı Üniversitesi Üniversite Caddesi No:27 34956 Orta Mahalle Tuzla - İstanbul *T* 0 216 483 90 00 - 2331 *www.sabanciuniv.edu* <https://www.sabanciuniv.edu/> <http://www.sabanciuniv.edu> |
From: Mark M. <mm...@ri...> - 2021-08-05 18:30:55
|
Hi Cemal, On Aug 4, 2021, at 2:57 PM, E. Cemal Gonen <emr...@sa...> wrote: > While we define our problem in constrained motion planning, we define the geometric constraints as mathematical equations, and the sampler creates samples based on the constrained methodology we chose. Is there a way to define "soft constraints", meaning constraints that are preferred? For example, consider a 2-DOF manipulator in the plane: how can we define a preference for the elbow configuration of the manipulator? Although it can be set by validity, this eliminates the other elbow configuration. I wonder if there is a proper way to do this. For soft constraints you can use the cost infrastructure in OMPL to define the cost for states/motions. The cost is defined using an OptimizationObjective-derived class. There are many predefined examples. See this page <https://ompl.kavrakilab.org/optimalPlanning.html> for details. Best, Mark |
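A soft preference like "elbow up" can be encoded as a state cost that an optimization objective integrates along the path: elbow-down states remain valid, they just cost more. A library-free sketch of such a cost for a planar 2-DOF arm (the penalty weight, the sign convention for "elbow up", and the crude trapezoid-rule integral are all invented for illustration):

```python
def elbow_preference_cost(state, penalty=5.0):
    """State cost for a planar 2-DOF arm state (shoulder, elbow) in radians.
    Elbow-up (negative elbow angle) is preferred; elbow-down costs extra,
    scaled by how far the elbow bends the 'wrong' way."""
    shoulder, elbow = state
    base = 1.0  # uniform base cost, i.e. a plain path-length term
    return base + (penalty * elbow if elbow > 0 else 0.0)

def path_cost(states):
    """Crude cost integral: trapezoid rule over state costs per segment."""
    return sum((elbow_preference_cost(a) + elbow_preference_cost(b)) / 2
               for a, b in zip(states, states[1:]))

elbow_up   = [(0.0, -0.5), (0.4, -0.6), (0.8, -0.5)]
elbow_down = [(0.0,  0.5), (0.4,  0.6), (0.8,  0.5)]
print(path_cost(elbow_up) < path_cost(elbow_down))  # True: elbow-up is preferred
```

An optimal planner minimizing this kind of objective keeps both elbow configurations reachable but steers solutions toward the preferred one, which is exactly the "soft constraint" behavior asked about.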
From: Mark M. <mm...@ri...> - 2021-08-05 18:25:50
|
Hi Cemal, On Aug 4, 2021, at 2:59 PM, E. Cemal Gonen <emr...@sa...> wrote: > I am using GoalSampleableRegion for my planning problem. The SpaceInformation is extracted from a constrained problem definition. Then, I set the goal using the setGoal() method. Is there a way to get each goal sample sampled by the GoalSampleableRegion, and how? Please look at the GoalSampleableRegion-derived class GoalLazySamples <https://ompl.kavrakilab.org/classompl_1_1base_1_1GoalLazySamples.html> as an example of how this could be done. That class allows you to specify a callback function that gets called anytime a new goal state is generated. Best, Mark |
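The callback pattern Mark describes can be sketched library-free: a lazy goal sampler hands every newly generated goal state to a user-supplied callback, so the caller can record or inspect each sample. The class and method names below are illustrative stand-ins, not the OMPL API:

```python
import random

class LazyGoalSampler:
    """Toy stand-in for a GoalLazySamples-style sampler: every newly
    generated goal state is passed to a user-supplied callback."""
    def __init__(self, sample_fn, on_new_goal):
        self.sample_fn = sample_fn
        self.on_new_goal = on_new_goal
        self.goals = []

    def sample(self, n):
        for _ in range(n):
            state = self.sample_fn()
            self.goals.append(state)
            self.on_new_goal(state)  # callback fires once per new goal state

seen = []
rng = random.Random(1)
sampler = LazyGoalSampler(lambda: (rng.uniform(-1, 1), rng.uniform(-1, 1)),
                          seen.append)
sampler.sample(3)
print(len(seen))  # 3: the callback observed every generated goal
```

In OMPL the sampling happens in GoalLazySamples' own thread, so a real callback must be thread-safe; the toy version above runs synchronously to keep the pattern visible.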
From: E. C. G. <emr...@sa...> - 2021-08-04 22:28:03
|
Sure, I did

On Thu, 5 Aug 2021 at 00:57, E. Cemal Gonen <emr...@sa...> wrote:

> ---------- Forwarded message ---------
> From: Mark Moll <mm...@ri...>
> Date: Thu, 5 Aug 2021, 00:53
> Subject: Re: OMPL contact form submission
> To: <emr...@sa...>
>
> Best,
> Mark
>
> On Aug 4, 2021, at 1:27 PM, Formspree <no...@fo...> wrote:
>
> *New form submission*
> name: E. Cemal Gonen
> _replyto: emr...@sa...
> message:
> Hello,
>
> While we define our problem in constrained motion planning, we define the
> geometric constraints as mathematical equations, and the sampler creates
> samples based on the constrained methodology we chose. Is there a way to
> define "soft constraints", meaning constraints that are preferred? For
> example, consider a 2-DOF manipulator in the plane: how can we define a
> preference for the elbow configuration of the manipulator? Although it can
> be set by validity, this eliminates the other elbow configuration. I wonder
> if there is a proper way to do this.
>
> Kindest regards,
> E. Cemal Gonen
>
> Submitted 08:27 PM - 04 August 2021

-- 
*EMRE CEMAL GÖNEN*
YÜKSEK LİSANS ÖĞRENCİSİ
GRADUATE STUDENT
------------------------------
Sabancı Üniversitesi
Üniversite Caddesi No:27
34956 Orta Mahalle
Tuzla - İstanbul
*T* 0 216 483 90 00 - 2331
*www.sabanciuniv.edu* <https://www.sabanciuniv.edu/> |
From: E. C. G. <emr...@sa...> - 2021-08-04 22:22:22
|
---------- Forwarded message ---------
From: Mark Moll <mm...@ri...>
Date: Thu, 5 Aug 2021, 00:53
Subject: Re: OMPL contact form submission
To: <emr...@sa...>

Can you resubmit your question to omp...@li... so that others can also learn from this?

Best,
Mark

On Aug 4, 2021, at 1:11 PM, Formspree <no...@fo...> wrote:

*New form submission*
name: E. Cemal Gonen
_replyto: emr...@sa...
message:
Hello,

I am using GoalSampleableRegion for my planning problem. The SpaceInformation is extracted from a constrained problem definition. Then, I set the goal using the setGoal() method. Is there a way to get each goal sample sampled by the GoalSampleableRegion, and how? Thank you very much for your answer.

Kindest Regards,
E. Cemal Gonen

Submitted 08:11 PM - 04 August 2021

-- 
*EMRE CEMAL GÖNEN*
YÜKSEK LİSANS ÖĞRENCİSİ
GRADUATE STUDENT
------------------------------
Sabancı Üniversitesi
Üniversite Caddesi No:27
34956 Orta Mahalle
Tuzla - İstanbul
*T* 0 216 483 90 00 - 2331
*www.sabanciuniv.edu* <https://www.sabanciuniv.edu/> |
From: E. C. G. <emr...@sa...> - 2021-08-04 22:19:55
|
---------- Forwarded message ---------
From: Mark Moll <mm...@ri...>
Date: Thu, 5 Aug 2021, 00:53
Subject: Re: OMPL contact form submission
To: <emr...@sa...>

Best,
Mark

On Aug 4, 2021, at 1:27 PM, Formspree <no...@fo...> wrote:

*New form submission*
name: E. Cemal Gonen
_replyto: emr...@sa...
message:
Hello,

While we define our problem in constrained motion planning, we define the geometric constraints as mathematical equations, and the sampler creates samples based on the constrained methodology we chose. Is there a way to define "soft constraints", meaning constraints that are preferred? For example, consider a 2-DOF manipulator in the plane: how can we define a preference for the elbow configuration of the manipulator? Although it can be set by validity, this eliminates the other elbow configuration. I wonder if there is a proper way to do this.

Kindest regards,
E. Cemal Gonen

Submitted 08:27 PM - 04 August 2021

-- 
*EMRE CEMAL GÖNEN*
YÜKSEK LİSANS ÖĞRENCİSİ
GRADUATE STUDENT
------------------------------
Sabancı Üniversitesi
Üniversite Caddesi No:27
34956 Orta Mahalle
Tuzla - İstanbul
*T* 0 216 483 90 00 - 2331
*www.sabanciuniv.edu* <https://www.sabanciuniv.edu/> |
From: Zachary K. <za...@ri...> - 2021-08-04 16:34:08
|
Additionally, if you are looking to find a path that is optimal with respect to the distance traveled by the end-effector, you could implement your own optimization objective (take a look at the tutorial here <http://ompl.kavrakilab.org/optimizationObjectivesTutorial.html>, for C++, and this demo <http://ompl.kavrakilab.org/OptimalPlanning_8py_source.html> for Python) which would give the distance traveled by the end-effector in the workspace. Zak On 8/4/21 10:40 AM, Mark Moll wrote: > Hi Adam, > > Expanding on Jonathan just wrote, you can combine his idea with the > type of local optimization that is typically done in a post processing > stage. There is a PathSimplifier::findBetterGoal > <https://github.com/ompl/ompl/blob/main/src/ompl/geometric/PathSimplifier.h#L219> method > that gets called by PathSimplifier::simplify to find a better IK > solution that can be directed to the rest of the path. Here, “better” > means the overall path cost (not just the cost of a state) as defined > by the optimization objective (which is path length by default). For > this to work, you need a goal of a type derived from > GoalSampleableRegion > <http://ompl.kavrakilab.org/classompl_1_1base_1_1GoalSampleableRegion.html>. > The derived class GoalLazySamples might be a good start; all you need > is a functor that returns IK solutions. > > Re. constrained planning, that often assumes the constraints apply to > the whole path, not just the goal state, so this might not be what you > want. (Side note: although it’s often convenient to reason > lower-dimensional constraint manifolds, the ProjectedStateSpace > <http://ompl.kavrakilab.org/classompl_1_1base_1_1ProjectedStateSpace.html> does > /not/ assume the set of constraint-satisfying configurations is a > manifold or low-dimensional). > > Best, > > Mark > > > >> On Aug 4, 2021, at 7:31 AM, Jonathan Gammell <ga...@ro... 
>> <mailto:ga...@ro...>> wrote: >> >> Hi Adam, >> >> This is not a complete solution, but an engineering solution to your >> problem would be to generate a sufficient number of goal arm >> configurations and then use a planning algorithm capable of solving >> single-start, multiple-goal problems to find the optimal path to that >> discrete set of goals. >> >> I don't know all the algorithms that would be appropriate, but I do >> know that BIT* (and similar algorithms, like AIT*) can handle >> multiple goals (and also multiple starts). >> >> Hope that helps, >> Jon >> >> On 04/08/2021 14:32, omp...@li... >> <mailto:omp...@li...> wrote: >>> Send ompl-users mailing list submissions to >>> omp...@li... >>> <mailto:omp...@li...> >>> >>> To subscribe or unsubscribe via the World Wide Web, visit >>> https://lists.sourceforge.net/lists/listinfo/ompl-users >>> or, via email, send a message with subject or body 'help' to >>> omp...@li... >>> >>> You can reach the person managing the list at >>> omp...@li... >>> >>> When replying, please edit your Subject line so it is more specific >>> than "Re: Contents of ompl-users digest..." >>> >>> >>> Today's Topics: >>> >>> 1. Workspace Planning (Adam Fishman) >>> >>> >>> ---------------------------------------------------------------------- >>> >>> Message: 1 >>> Date: Tue, 3 Aug 2021 11:44:10 -0700 >>> From: Adam Fishman <afi...@cs...> >>> To: omp...@li... >>> Subject: [ompl-users] Workspace Planning >>> Message-ID: <949...@cs...> >>> Content-Type: text/plain; charset="utf-8" >>> >>> Hi all, >>> >>> I am trying to do optimal workspace planning with OMPL for a robotic >>> manipulator, but I haven't found many resources on how to do this. >>> FWIW I am using the direct Python bindings for OMPL due to my system >>> constraints (I can't use ROS for my task). >>> >>> The simplest way to do it seems to be to use IK to get a target >>> configuration and then to use something like BIT* to plan. 
The issue >>> with this, however, is that the IK solution may not itself be >>> optimal. That is, the optimal path to one IK solution may be shorter >>> than the optimal path to another. And, finding the ideal IK solution >>> seems to be a planning problem itself. >>> >>> I've been looking into Constrained Planning, but the workspace is >>> not exactly a zero-volume constraint manifold, so I'm not sure how >>> to fit this into the framework. >>> >>> I imagine this is a problem that's been explored extensively, but >>> I'm not sure how to proceed. Does anyone have suggestions? >>> >>> Thanks a lot, >>> Adam Fishman >>> He/Him >>> fishbotics.com <http://fishbotics.com/> >>> -------------- next part -------------- >>> An HTML attachment was scrubbed... >>> >>> ------------------------------ >>> >>> >>> >>> ------------------------------ >>> >>> Subject: Digest Footer >>> >>> _______________________________________________ >>> ompl-users mailing list >>> omp...@li... >>> https://lists.sourceforge.net/lists/listinfo/ompl-users >>> >>> >>> ------------------------------ >>> >>> End of ompl-users Digest, Vol 108, Issue 1 >>> ****************************************** >> >> -- >> Jonathan Gammell, Ph.D. >> >> Departmental Lecturer in Robotics >> Estimation, Search, and Planning (ESP) Research Group >> Oxford Robotics Institute (ORI) >> Department of Engineering Science >> University of Oxford >> office: +44 1865 613082 >> skype: jdgammell >> https://robotic-esp.com/gammell <https://robotic-esp.com/gammell> >> https://twitter.com/robotic_esp >> >> >> >> _______________________________________________ >> ompl-users mailing list >> omp...@li... >> https://lists.sourceforge.net/lists/listinfo/ompl-users > > > > _______________________________________________ > ompl-users mailing list > omp...@li... > https://lists.sourceforge.net/lists/listinfo/ompl-users |
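Zak's suggested objective needs forward kinematics to map each configuration-space state to the workspace before measuring distance. A library-free sketch of the core computation for a planar 2-link arm (the link lengths and sample path are invented; in OMPL this quantity would live inside an optimization objective's motion cost rather than a free function):

```python
import math

L1, L2 = 1.0, 0.8  # hypothetical link lengths of a planar 2-link arm

def fk(q):
    """Forward kinematics: end-effector position for joint angles q."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return (x, y)

def workspace_path_length(configs):
    """Distance traveled by the end-effector along a configuration-space path."""
    pts = [fk(q) for q in configs]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

path = [(0.0, 0.0), (0.2, 0.1), (0.4, 0.2)]  # joint-space waypoints (invented)
print(workspace_path_length(path) > 0.0)  # True: the end-effector moved
```

Minimizing this instead of joint-space length changes which paths (and which IK goals) look cheap, which is exactly why it pairs well with the multi-goal ideas discussed above.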
From: Mark M. <mm...@ri...> - 2021-08-04 15:40:11
|
Hi Adam, Expanding on what Jonathan just wrote, you can combine his idea with the type of local optimization that is typically done in a post-processing stage. There is a PathSimplifier::findBetterGoal <https://github.com/ompl/ompl/blob/main/src/ompl/geometric/PathSimplifier.h#L219> method that gets called by PathSimplifier::simplify to find a better IK solution that can be directed to the rest of the path. Here, “better” means the overall path cost (not just the cost of a state) as defined by the optimization objective (which is path length by default). For this to work, you need a goal of a type derived from GoalSampleableRegion <http://ompl.kavrakilab.org/classompl_1_1base_1_1GoalSampleableRegion.html>. The derived class GoalLazySamples might be a good start; all you need is a functor that returns IK solutions. Re. constrained planning, that often assumes the constraints apply to the whole path, not just the goal state, so this might not be what you want. (Side note: although it’s often convenient to reason about lower-dimensional constraint manifolds, the ProjectedStateSpace <http://ompl.kavrakilab.org/classompl_1_1base_1_1ProjectedStateSpace.html> does not assume the set of constraint-satisfying configurations is a manifold or low-dimensional). Best, Mark > On Aug 4, 2021, at 7:31 AM, Jonathan Gammell <ga...@ro...> wrote: > > Hi Adam, > > This is not a complete solution, but an engineering solution to your problem would be to generate a sufficient number of goal arm configurations and then use a planning algorithm capable of solving single-start, multiple-goal problems to find the optimal path to that discrete set of goals. > > I don't know all the algorithms that would be appropriate, but I do know that BIT* (and similar algorithms, like AIT*) can handle multiple goals (and also multiple starts). > > Hope that helps, > Jon > > On 04/08/2021 14:32, omp...@li... wrote: >> Send ompl-users mailing list submissions to >> omp...@li... 
>> >> To subscribe or unsubscribe via the World Wide Web, visit >> https://lists.sourceforge.net/lists/listinfo/ompl-users >> or, via email, send a message with subject or body 'help' to >> omp...@li... >> >> You can reach the person managing the list at >> omp...@li... >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of ompl-users digest..." >> >> >> Today's Topics: >> >> 1. Workspace Planning (Adam Fishman) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Tue, 3 Aug 2021 11:44:10 -0700 >> From: Adam Fishman <afi...@cs...> >> To: omp...@li... >> Subject: [ompl-users] Workspace Planning >> Message-ID: <949...@cs...> >> Content-Type: text/plain; charset="utf-8" >> >> Hi all, >> >> I am trying to do optimal workspace planning with OMPL for a robotic manipulator, but I haven't found many resources on how to do this. FWIW I am using the direct Python bindings for OMPL due to my system constraints (I can't use ROS for my task). >> >> The simplest way to do it seems to be to use IK to get a target configuration and then to use something like BIT* to plan. The issue with this, however, is that the IK solution may not itself be optimal. That is, the optimal path to one IK solution may be shorter than the optimal path to another. And, finding the ideal IK solution seems to be a planning problem itself. >> >> I've been looking into Constrained Planning, but the workspace is not exactly a zero-volume constraint manifold, so I'm not sure how to fit this into the framework. >> >> I imagine this is a problem that's been explored extensively, but I'm not sure how to proceed. Does anyone have suggestions? >> >> Thanks a lot, >> Adam Fishman >> He/Him >> fishbotics.com <http://fishbotics.com/> >> -------------- next part -------------- >> An HTML attachment was scrubbed... 
>> >> ------------------------------ >> >> >> >> ------------------------------ >> >> Subject: Digest Footer >> >> _______________________________________________ >> ompl-users mailing list >> omp...@li... >> https://lists.sourceforge.net/lists/listinfo/ompl-users >> >> >> ------------------------------ >> >> End of ompl-users Digest, Vol 108, Issue 1 >> ****************************************** > > -- > Jonathan Gammell, Ph.D. > > Departmental Lecturer in Robotics > Estimation, Search, and Planning (ESP) Research Group > Oxford Robotics Institute (ORI) > Department of Engineering Science > University of Oxford > office: +44 1865 613082 > skype: jdgammell > https://robotic-esp.com/gammell > https://twitter.com/robotic_esp > > > > _______________________________________________ > ompl-users mailing list > omp...@li... > https://lists.sourceforge.net/lists/listinfo/ompl-users |
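Mark's GoalLazySamples suggestion above might be sketched roughly as follows. This is only an illustration, not code from the thread: solveIK is a hypothetical placeholder for the user's own IK routine, the cap of 100 samples is arbitrary, and si/pdef are assumed to be a SpaceInformationPtr and ProblemDefinitionPtr that are already set up.

```cpp
#include <ompl/base/goals/GoalLazySamples.h>

namespace ob = ompl::base;

// Hypothetical IK routine: writes one joint-space solution for the desired
// end-effector pose into `state` and returns whether it succeeded.
bool solveIK(ob::State *state);

void setIKGoal(const ob::SpaceInformationPtr &si, const ob::ProblemDefinitionPtr &pdef)
{
    // The sampling function is called from a background thread; returning
    // true keeps it alive so more IK solutions can be added while planning.
    ob::GoalSamplingFn samplingFn = [](const ob::GoalLazySamples *gls, ob::State *state)
    {
        return solveIK(state) && gls->maxSampleCount() < 100;
    };
    auto goal = std::make_shared<ob::GoalLazySamples>(si, samplingFn);
    pdef->setGoal(goal);
}
```

The planner then draws goal states from this region as they become available, and findBetterGoal can later swap in whichever IK solution gives the cheapest overall path.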
From: Jonathan G. <ga...@ro...> - 2021-08-04 14:31:41
|
Hi Adam, This is not a complete solution, but an engineering solution to your problem would be to generate a sufficient number of goal arm configurations and then use a planning algorithm capable of solving single-start, multiple-goal problems to find the optimal path to that discrete set of goals. I don't know all the algorithms that would be appropriate, but I do know that BIT* (and similar algorithms, like AIT*) can handle multiple goals (and also multiple starts). Hope that helps, Jon -- Jonathan Gammell, Ph.D. Departmental Lecturer in Robotics Estimation, Search, and Planning (ESP) Research Group Oxford Robotics Institute (ORI) Department of Engineering Science University of Oxford office: +44 1865 613082 skype: jdgammell https://robotic-esp.com/gammell https://twitter.com/robotic_esp |
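Jonathan's multi-goal approach can be sketched as below. This is a hedged illustration, not verbatim OMPL documentation: si and pdef are assumed to be already configured, ikSolutions is assumed to hold the precomputed goal configurations, and the BITstar header path may differ between OMPL versions.

```cpp
#include <ompl/base/goals/GoalStates.h>
#include <ompl/geometric/planners/informedtrees/BITstar.h>

namespace ob = ompl::base;
namespace og = ompl::geometric;

void planToGoalSet(const ob::SpaceInformationPtr &si,
                   const ob::ProblemDefinitionPtr &pdef,
                   const std::vector<ob::ScopedState<>> &ikSolutions)
{
    // A goal region made up of a discrete set of IK configurations;
    // BIT* searches for the optimal path to any one of them.
    auto goals = std::make_shared<ob::GoalStates>(si);
    for (const auto &ik : ikSolutions)
        goals->addState(ik);
    pdef->setGoal(goals);

    auto planner = std::make_shared<og::BITstar>(si);
    planner->setProblemDefinition(pdef);
    planner->setup();
    planner->solve(5.0);
}
```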
From: Adam F. <afi...@cs...> - 2021-08-03 19:10:26
|
Hi all, I am trying to do optimal workspace planning with OMPL for a robotic manipulator, but I haven’t found many resources on how to do this. FWIW I am using the direct Python bindings for OMPL due to my system constraints (I can’t use ROS for my task). The simplest way to do it seems to be to use IK to get a target configuration and then to use something like BIT* to plan. The issue with this, however, is that the IK solution may not itself be optimal. That is, the optimal path to one IK solution may be shorter than the optimal path to another. And, finding the ideal IK solution seems to be a planning problem itself. I’ve been looking into Constrained Planning, but the workspace is not exactly a zero-volume constraint manifold, so I’m not sure how to fit this into the framework. I imagine this is a problem that’s been explored extensively, but I’m not sure how to proceed. Does anyone have suggestions? Thanks a lot, Adam Fishman He/Him fishbotics.com <http://fishbotics.com/> |
From: Evelien H. <E.J...@st...> - 2021-07-06 14:53:22
|
Hi Hadar, I did not run into this exception. However, I do have an example of using PlannerDataStorage for storing and loading a graph, where data is an ompl::base::PlannerDataPtr.

Store data:

data->decoupleFromPlanner(); // necessary for doing anything with planner data
const char *filename = "plannerdata";
ompl::base::PlannerDataStorage datastorage;
datastorage.store(*data, filename);

Load data:

ompl::base::PlannerDataStorage datastorage;
const char *filename = "plannerdata";
datastorage.load(filename, *data);

Good luck! Best, Evelien |
From: hadar s. <had...@gm...> - 2021-07-06 14:20:00
|
Hi, I am working with PRM and a real vector state space, and I'm trying to store the graph using PlannerDataStorage. When calling PlannerDataStorage.store(PlannerData, filename), the following code line inside PlannerDataStorage.store(PlannerData, std::ostream): boost::archive::binary_oarchive oa(out); throws an access-violation-reading-location exception (in msvcp140.dll). Does anyone have any idea why? Also, is there a demo with an example of how to use PlannerDataStorage for storing/loading a graph? Best, Hadar. |
From: David B. <boo...@gm...> - 2021-05-06 10:07:36
|
Hi, When running 'make' to build OMPL, I get the following error:

[91%] Linking CXX executable ../bin/demo_AnytimePathShortening
../lib/libompl_app_base.so.1.5.2: undefined reference to 'fcl::Triangle::operator[](int) const'
../lib/libompl_app_base.so.1.5.2: undefined reference to 'unsigned long fcl::collide<double>(fcl::CollisionGeometry<double> const*, Eigen::Transform<double, 3, 1, 0> const&, fcl::CollisionGeometry<double> const*, Eigen::Transform<double, 3, 1, 0> const&, fcl::CollisionRequest<double> const&, fcl::CollisionResult<double>&)'

I have already run 'make install' for the latest FCL libraries. How can I resolve this? Thanks, Boon |
From: Mark M. <mm...@ri...> - 2021-04-08 17:30:50
|
Hi Trifena, Your ValidityChecker::isValid function will be called a lot by OMPL’s planners. It looks like this method is doing a lot of initialization work that could be moved to the constructor. I suspect that this method is currently just really slow. If you call solve() with a much larger time threshold, you should see that it is able to generate more than just 1 state. If not, then there’s something else going wrong. Best, Mark > On Apr 8, 2021, at 12:35 AM, Trifena Damaris <tri...@gm...> wrote: > > Dear OMPL developer, > > Hi. I just started using OMPL a couple of days ago, so I am still trying to understand it. > I am trying to use the OMPL RRTConnect planner. The robot that I use is a 6-DOF Puma arm. > I consider a 6-DOF joint space, I used the real vector space with bounds, > and in the state validity checker I check the chosen path for collisions. > > I set the start and goal points as joint angles for every joint (in radians) -> I'm not sure whether I use this correctly. > When I try to run the code, it says that no solution is found. Is it because I declare the start and goal points wrongly? > > I am attaching the result and the code. > > Any help is appreciated! Thank you! |
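To illustrate Mark's point, the expensive, state-independent setup in Trifena's isValid (image loading and obstacle-map construction) can run once in the constructor, so each validity query only does the collision check. This is a sketch reusing the application-specific types from the original code (RobotCollisionDetector, MapGenerator, etc., which are not part of OMPL):

```cpp
class ValidityChecker : public ob::StateValidityChecker
{
public:
    ValidityChecker(const ob::SpaceInformationPtr &si) : ob::StateValidityChecker(si)
    {
        // Done once: build the obstacle map from the camera images.
        obstacle_point_cloud_.reset(new PointCloudXYZRGB);
        cv::Mat rgb_main = cv::imread("0211_103231_572_main.png");
        cv::Mat rgb_sub = cv::imread("0211_103231_701_sub.png");
        cv::Mat depth = depth_image_generator_.getDepthImage(rgb_main, rgb_sub);
        Pose pose_home;
        utility::setPose(&pose_home, 0.23573, -0.294639, 0.229634,
                         1.83002, -0.0000699457, 0.0000792287);
        map_generator_.updateRGBMap(rgb_main, depth, pose_home);
        map_generator_.getRGBObstacleMap(obstacle_point_cloud_);
    }

    bool isValid(const ob::State *state) const override
    {
        // Done per query: only the collision check against the cached map.
        const auto *vec = state->as<ob::RealVectorStateSpace::StateType>();
        Vector6d joints;
        for (int i = 0; i < 6; ++i)
            joints[i] = vec->values[i];
        bool collision = false;
        robot_collision_detector_.checkCollision(joints, obstacle_point_cloud_, &collision);
        return !collision;
    }

private:
    // mutable because isValid is const; checkCollision may not be const.
    mutable RobotCollisionDetector robot_collision_detector_;
    DepthImageGenerator depth_image_generator_;
    MapGenerator map_generator_;
    PointCloudXYZRGB::Ptr obstacle_point_cloud_;
};
```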
From: Sundas M. <sun...@gm...> - 2021-04-08 07:53:01
|
Hello, I hope you're doing well. I'm working on my R&D project and I have some implementation issues when adding a planner. I added a header and a cpp file under the geometric planners: I copied the RRT* header and cpp code and pasted them into new files named "rrtstarx", just to check whether a planner gets added or not. Then I added the header file in geometric.txt after the RRT* header. After that I went into the build directory, opened a terminal in Release, and ran make clean_bindings and then make -j 3 update_bindings. But after updating, when I open the GUI the new planner is not there. Can you help me with that? I just need instructions so that I can add a new planner to OMPL. I'm waiting for your positive response. Regards!!! |
From: Trifena D. <tri...@gm...> - 2021-04-08 07:35:32
|
Dear OMPL developer,

Hi. I just started using OMPL a couple of days ago, so I am still trying to understand it. I am trying to use the OMPL RRTConnect planner. The robot that I use is a 6-DOF Puma arm. I consider a 6-DOF joint space and I used the real vector space with bounds, and in the state validity checker I check the chosen path for collisions.

I set the start and goal points as joint angles for every joint (in radians) -> I'm not sure whether I use this correctly. When I try to run the code, it says that no solution is found. Is it because I declare the start and goal points wrongly?

I am attaching the result and the code. Any help is appreciated! Thank you!

Here is the code, I modified it from the Optimal Planning Tutorial:

#include <ompl/base/SpaceInformation.h>
#include <ompl/base/spaces/SE3StateSpace.h>
#include <ompl/geometric/planners/rrt/RRTConnect.h>
#include <ompl/geometric/SimpleSetup.h>
#include <ompl/geometric/planners/rrt/RRT.h>
#include <ompl/base/objectives/PathLengthOptimizationObjective.h>
#include <ompl/base/Planner.h>
#include <ompl/base/ProblemDefinition.h>
#include <ompl/base/PlannerStatus.h>
#include <ompl/config.h>
#include <iostream>
#include <cmath>
#include <fstream>

namespace ob = ompl::base;
namespace og = ompl::geometric;

class ValidityChecker : public ob::StateValidityChecker
{
public:
    ValidityChecker(const ob::SpaceInformationPtr &si) : ob::StateValidityChecker(si) {}

    bool isValid(const ob::State *state) const
    {
        const ob::RealVectorStateSpace::StateType *state_vec = state->as<ob::RealVectorStateSpace::StateType>();

        RobotCollisionDetector robot_collision_detector;
        Vector6d joint_6d;
        DepthImageGenerator depth_image_generator;
        MapGenerator map_generator;
        bool collision = false;

        PointCloudXYZRGB::Ptr obstacle_point_cloud;
        obstacle_point_cloud.reset(new PointCloudXYZRGB);

        // read image files (home)
        cv::Mat rgb_image_home_main = cv::imread("0211_103231_572_main.png");
        cv::Mat rgb_image_home_sub = cv::imread("0211_103231_701_sub.png");

        // generate depth image
        cv::Mat depth_image_home = depth_image_generator.getDepthImage(rgb_image_home_main, rgb_image_home_sub);

        Pose pose_home;
        utility::setPose(&pose_home, 0.23573, -0.294639, 0.229634, 1.83002, -0.0000699457, 0.0000792287);

        map_generator.updateRGBMap(rgb_image_home_main, depth_image_home, pose_home);
        map_generator.getRGBObstacleMap(obstacle_point_cloud);

        for (int i = 0; i < 6; ++i)
        {
            std::cout << "joint:" << state_vec->values[i] << std::endl;
            joint_6d[i] = state_vec->values[i];
        }

        robot_collision_detector.checkCollision(joint_6d, obstacle_point_cloud, &collision);
        if (collision == true)
        {
            std::cout << "collision detected" << std::endl;
            return false;
        }

        return true;
    }
};

ob::OptimizationObjectivePtr getPathLengthObjective(const ob::SpaceInformationPtr &si)
{
    ob::OptimizationObjectivePtr obj(new ob::PathLengthOptimizationObjective(si));
    obj->setCostThreshold(ob::Cost(M_PI));
    return obj;
}

void armPlanWithRRT()
{
    // Construct the state space where we are planning
    ob::StateSpacePtr space(new ob::RealVectorStateSpace(6));

    ob::RealVectorBounds bounds(6);
    // for (int i = 0; i < 6; ++i)
    // {
    //     bounds.setLow(i, -M_PI);
    //     bounds.setHigh(i, M_PI);
    // }

    bounds.setLow(0, -3.05); bounds.setHigh(0, 3.05);
    bounds.setLow(1, -3.05); bounds.setHigh(1, 3.05);
    bounds.setLow(2, -2.79); bounds.setHigh(2, 2.79);
    bounds.setLow(3, -3.05); bounds.setHigh(3, 3.05);
    bounds.setLow(4, -2.40); bounds.setHigh(4, 2.40);
    bounds.setLow(5, -3.05); bounds.setHigh(5, 3.05);

    space->as<ob::RealVectorStateSpace>()->setBounds(bounds);

    // Construct a space information instance for this state space
    ob::SpaceInformationPtr si(new ob::SpaceInformation(space));

    // Set the object used to check which states in the space are valid
    si->setStateValidityChecker(ob::StateValidityCheckerPtr(new ValidityChecker(si)));
    si->setup();

    // Set up start and goal states
    ob::ScopedState<ob::RealVectorStateSpace> start(space);
    start->as<ob::RealVectorStateSpace::StateType>()->values[0] = -0.777708;
    start->as<ob::RealVectorStateSpace::StateType>()->values[1] = -0.26842;
    start->as<ob::RealVectorStateSpace::StateType>()->values[2] = -1.80781;
    start->as<ob::RealVectorStateSpace::StateType>()->values[3] = 2.02778;
    start->as<ob::RealVectorStateSpace::StateType>()->values[4] = 2.24392;
    start->as<ob::RealVectorStateSpace::StateType>()->values[5] = 2.2386;
    start.print(std::cout);

    ob::ScopedState<ob::RealVectorStateSpace> goal(space);
    goal->as<ob::RealVectorStateSpace::StateType>()->values[0] = -2.01023;
    goal->as<ob::RealVectorStateSpace::StateType>()->values[1] = -0.31717;
    goal->as<ob::RealVectorStateSpace::StateType>()->values[2] = -1.62667;
    goal->as<ob::RealVectorStateSpace::StateType>()->values[3] = 1.40112;
    goal->as<ob::RealVectorStateSpace::StateType>()->values[4] = 1.16346;
    goal->as<ob::RealVectorStateSpace::StateType>()->values[5] = 1.97896;
    goal.print(std::cout);

    // Create a problem instance
    ob::ProblemDefinitionPtr pdef(new ob::ProblemDefinition(si));

    // Set the start and goal states
    pdef->setStartAndGoalStates(start, goal);

    pdef->setOptimizationObjective(getPathLengthObjective(si));

    // Construct our planner using the RRTConnect algorithm.
    ob::PlannerPtr optimizingPlanner(new ompl::geometric::RRTConnect(si));

    // Set the problem instance for our planner to solve
    optimizingPlanner->setProblemDefinition(pdef);
    optimizingPlanner->setup();

    ob::PlannerStatus solved = optimizingPlanner->solve(1.0);

    if (solved)
    {
        std::cout << "Found solution:" << std::endl;
        pdef->getSolutionPath()->print(std::cout);
    }
    else
    {
        std::cout << "No solution found" << std::endl;
    }

    optimizingPlanner->clear();
}

int main(int /*argc*/, char ** /*argv*/)
{
    std::cout << "*********************planRRT*****************" << std::endl << std::endl;
    armPlanWithRRT();
    return 0;
}

The result when I tried to run the code:

*********************planRRT*****************

RealVectorState [-0.777708 -0.26842 -1.80781 2.02778 2.24392 2.2386]
RealVectorState [-2.01023 -0.31717 -1.62667 1.40112 1.16346 1.97896]
Debug: RRTConnect: Planner range detected to be 2.849676
[MAP GENERATOR] obstacle_cloud_ size_after_update: 7987
joint:-0.777708
joint:-0.26842
joint:-1.80781
joint:2.02778
joint:2.24392
joint:2.2386
no collision detected
Info: RRTConnect: Starting planning with 1 states already in datastructure
Info: RRTConnect: Created 1 states (1 start + 0 goal)
No solution found
|
From: Mark M. <mm...@ri...> - 2021-04-07 00:47:11
|
Computing time-optimal trajectories is currently not supported in OMPL. You could use path length as a proxy for path duration. Within MoveIt you can use the PlanningAdapter concept to generate a time-optimal trajectory (using, e.g., TOTG) <http://moveit2_tutorials.picknik.ai/doc/time_parameterization/time_parameterization_tutorial.html> for a given path produced by OMPL. This won’t change the path, but just add timestamps s.t. joint velocity and acceleration limits are respected. Best, Mark > On Apr 6, 2021, at 8:05 AM, Federico Insero <ins...@li...> wrote: > > Hi Developers and Users, > > I am a MSc. student of Mechanical Engineering and I am working with ROS for a thesis project on my own and I would like to use the RRT for planning my robot (which is a redundant one). I need to implement a time optimization cost function for moving the robot with MoveIt package and the RRT planner. How can i do that? I have seen the tutorials, particularly: https://ompl.kavrakilab.org/optimizationObjectivesTutorial.html <https://ompl.kavrakilab.org/optimizationObjectivesTutorial.html> . But I am still confused on how can I implement. > > Thank you so much for the help, > > Federico > Optimization Objectives Tutorial - OMPL <https://ompl.kavrakilab.org/optimizationObjectivesTutorial.html> > The above code fragment creates and optimization objective which attempts to optimize both path length and clearance. We begin by defining each of the individual objectives, and then we add them to a ompl::base::MultiOptimizationObjective object. This results in an optimization objective where path cost is equivalent to summing up each of the individual objectives' path costs. > ompl.kavrakilab.org <http://ompl.kavrakilab.org/> > _______________________________________________ > ompl-users mailing list > omp...@li... <mailto:omp...@li...> > https://lists.sourceforge.net/lists/listinfo/ompl-users <https://lists.sourceforge.net/lists/listinfo/ompl-users> |
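Following Mark's suggestion, optimizing path length as a proxy for duration comes down to setting OMPL's default length objective on the problem definition; time parameterization (e.g. TOTG in MoveIt) is then applied afterwards. A minimal sketch, assuming si and pdef are an already-configured SpaceInformationPtr and ProblemDefinitionPtr:

```cpp
#include <ompl/base/objectives/PathLengthOptimizationObjective.h>

namespace ob = ompl::base;

void setLengthObjective(const ob::SpaceInformationPtr &si,
                        const ob::ProblemDefinitionPtr &pdef)
{
    // Optimize geometric path length; a time-parameterization step later
    // assigns timestamps without changing the path itself.
    pdef->setOptimizationObjective(
        std::make_shared<ob::PathLengthOptimizationObjective>(si));
}
```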
From: Federico I. <ins...@li...> - 2021-04-06 15:38:39
|
Hi Developers and Users, I am an MSc student of Mechanical Engineering working with ROS on a thesis project, and I would like to use RRT to plan for my robot (which is a redundant one). I need to implement a time-optimization cost function for moving the robot with the MoveIt package and the RRT planner. How can I do that? I have seen the tutorials, particularly https://ompl.kavrakilab.org/optimizationObjectivesTutorial.html, but I am still confused about how I can implement it. Thank you so much for the help, Federico |
From: Peter M. <mit...@gm...> - 2021-03-11 15:26:45
|
Thanks, this is great news! Now, if I may, let me tack on another issue I've found with the bindings. The following gives an error only when isSatisfied is overridden: https://gist.github.com/PeterMitrano/1e6234f0f200e9adba0428e1a35fc6fb I assume the problem is in the *double argument? Thanks in advance! On Wed, Mar 10, 2021 at 5:22 PM Mark Moll <mm...@ri...> wrote: > Hi Peter, > > There was a bug in the Python binding generation code that I just fixed > <https://github.com/ompl/ompl/commit/1ce20e2940f85c6931b673627eb6923be1fb1943>. > After you update ompl, regenerate the bindings and recompile, something > like this should work. > > Best, > > Mark |
From: Mark M. <mm...@ri...> - 2021-03-10 22:22:35
|
Hi Peter, There was a bug in the Python binding generation code that I just fixed <https://github.com/ompl/ompl/commit/1ce20e2940f85c6931b673627eb6923be1fb1943>. After you update ompl, regenerate the bindings and recompile, something like this should work:

planner_data = oc.PlannerData(si)
planner.getPlannerData(planner_data)
edgeMap = ob.mapUintToPlannerDataEdge()
for i in range(planner_data.numVertices()):
    vi = planner_data.vertexIndex(planner_data.getVertex(i))
    planner_data.getEdges(vi, edgeMap)
    for index, control in edgeMap:
        print(vi, '->', index, control.getDuration(),
              control.getControl()[0], control.getControl()[1])

This will print out, for each edge, the control duration and the first two components of the control input vector. Best, Mark > On Feb 11, 2021, at 1:32 PM, Peter Mitrano <mit...@gm...> wrote: > > I'm having trouble getting control info in python. I'm using an oc.RRT and getting the planner data by: > > planner_data = ob.PlannerData(si) > rrt.getPlannerData(planner_data) > > which works, but from there I cannot figure out how to get the controls. I know I should call getControls() on a control::PlannerDataEdgeControl but I can't figure out how to get that. Also, hasControls returns False. I can also see that the bindings for control::PlannerData and control::PlannerDataEdgeControl are not being generated, not sure why, since control::PlannerDataEdgeStorage is. > > Might I get help on making the appropriate changes to get this to work? Thanks! |