This list is closed; nobody may subscribe to it.
Archived messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2007 | | | | | 2 | | | | | | | |
| 2008 | | | | | 6 | | | | | | 8 | 5 |
| 2009 | 5 | 1 | 3 | 4 | 4 | | | | 8 | | 1 | 13 |
| 2010 | | 6 | 4 | 1 | 10 | 43 | 37 | 3 | 6 | 26 | 17 | 29 |
| 2011 | 28 | 18 | 42 | 18 | 13 | 32 | 32 | 25 | 46 | 41 | 36 | 43 |
| 2012 | 92 | 120 | 40 | 75 | 40 | 93 | 115 | 67 | 38 | 92 | 95 | 47 |
| 2013 | 171 | 200 | 100 | 134 | 112 | 142 | 123 | 66 | 175 | 236 | 141 | 98 |
| 2014 | 91 | 88 | 126 | 63 | 123 | 122 | 105 | 83 | 114 | 90 | 181 | 85 |
| 2015 | 111 | 120 | 161 | 95 | 93 | 185 | 170 | 119 | 128 | 110 | 145 | 92 |
| 2016 | 105 | 106 | 101 | 59 | 96 | 168 | 110 | 183 | 85 | 79 | 87 | 86 |
| 2017 | 100 | 77 | 85 | 52 | 60 | 63 | 67 | 24 | 1 | | 2 | |
From: Aaron D. <aar...@gm...> - 2010-09-16 13:33:22
Dear Masanobu, Thanks for your question. Today I am writing to ask about adequate parameters of Overshooting like " > overshoot_f_..." and "mass_for_overshoot_...". I'm using version 2258 of > MESA, but I don't know and couldn't find the parameters I should use. Would > you tell me generally adequate parameters for Overshooting? > My first suggestion would be that you read the MESA "instrument paper" that recently appeared on astro-ph! http://arxiv.org/abs/1009.1622 It contains a description of the overshoot mixing in MESA/star and plenty of references on how the method is implemented and what values of the overshoot_f_* are recommended for different boundaries (and evolutionary phases). > Moreover, I want to use same treatment for Overshooting with Girardi. > If you know the parameters to do so, would you tell me also those? > There isn't a simple answer to this question. MESA/star treats overshoot mixing in a very different manner from the Padova group in a couple of ways: (1) MESA/star treats convective mixing as a time-dependent, diffusive process and (2) the overshoot diffusion coefficient is modeled as an exponential decay of the diffusion coefficient at the convective boundary. Many (but not all) other codes treat convective regions as instantaneously mixed and overshoot as an "all or nothing" approach, where the extent of the overshoot region is fully mixed with the adjoining convective region. Best wishes, Aaron |
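For readers who want a concrete starting point, here is a sketch of what the overshoot controls discussed above might look like in an inlist. Only the overshoot_f_* and mass_for_overshoot_* prefixes come from the message; the completed control names and the numerical values are illustrative assumptions, not recommendations, and the exact names differ between MESA releases, so check the defaults files for your version and the instrument paper for guidance.

```fortran
! Illustrative &controls fragment only -- control names completed and values
! chosen purely for the sake of example.
&controls
   ! exponential overshoot: D(z) = D_conv * exp(-2*z / (f*H_p)) beyond the
   ! convective boundary, so f sets the e-folding scale in pressure scale heights
   overshoot_f_above_burn_h = 0.016       ! hypothetical name and value
   overshoot_f_below_nonburn = 0.016      ! hypothetical name and value
   mass_for_overshoot_full_on = 1.8       ! hypothetical: overshoot assumed to ramp
   mass_for_overshoot_full_off = 1.2      ! between these masses (Msun)
/
```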
From: Masanobu K. <kun...@ge...> - 2010-09-16 12:28:58
Dear all, Hello, I'm Masanobu Kunitomo, a graduate student at Tokyo Institute of Technology. Today I am writing to ask about appropriate parameters for overshooting, such as "overshoot_f_..." and "mass_for_overshoot_...". I'm using version 2258 of MESA, but I couldn't find guidance on which values to use. Would you tell me generally appropriate parameters for overshooting? I would also like to use the same treatment of overshooting as Girardi; if you know the parameters for that, would you tell me those as well? Sincerely, Masanobu Kunitomo ---------------------------------------------- Masanobu Kunitomo graduate student Department of Earth and Planetary Sciences Tokyo Institute of Technology Email: kun...@ge... ----------------------------------------------
From: Bill P. <pa...@ki...> - 2010-09-10 00:26:22
Many of you likely knew this was in the works, but if not, then please have a look: [1] arXiv:1009.1622 [pdf, other] Title: Modules for Experiments in Stellar Astrophysics (MESA) Authors: Bill Paxton, Lars Bildsten, Aaron Dotter, Falk Herwig, Pierre Lesaffre, Frank Timmes Comments: 110 pages, 39 figures; submitted to ApJS; visit the MESA website at this http URL Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Instrumentation and Methods for Astrophysics (astro-ph.IM) |
From: Bill P. <pa...@ki...> - 2010-09-07 15:42:42
On Sep 6, 2010, at 2:58 AM, xianfei zhang wrote: > Dear Bill, > I am trying to use MESA on some special stages of stellar evolution. > I want to heat the envelope of the star or to add some high temperature/entropy matters on it surface. > I don't know each packages of MESA in detail, could you please tell me which variable can used for this or do you have an introduction for all variables? > Dose 'extra_power_source' or 'T_function1' works for this? How to use it? (I just want to heat part of star not whole star) > Thank you very much. > Kind regards, > Xianfei Hi, Please install the current release, which as of 5 minutes ago, is 2647. In response to your request, there are some new controls concerning extra heating. These can be set in the &star_job section of your inlist file. relax_eps_extra = .false. ! if true, add an extra power source relax_eps_extra_min_steps = 0 ! use at least this many steps to change the parameters ! extra power is added according to the relative mass coordinate q new_qlow_for_extra_eps = 0 ! min q for extra eps new_qpeak_for_extra_eps = 0 ! q between min and max new_qhigh_for_extra_eps = 0 ! max q for extra eps new_extra_eps_at_qlow = 0 ! power at qlow as ergs/g/s new_extra_eps_at_qpeak = 0 ! power at qpeak new_extra_eps_at_qhigh = 0 ! power at qhigh ! interpolate extra_eps linearly in q between these 3 points If the code has trouble converging after you have changed eps_extra, you may need to turn it on at part strength and let the code run for ten or twenty steps to stabilize before repeating the process until you reach full strength. Cheers, Bill |
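To make the list of controls above easier to scan, here is one way they might be combined in an inlist. This is only a sketch using the control names quoted in Bill's message; the q range and power values are made-up numbers for illustration, and a gradual ramp (as Bill suggests) may still be needed if convergence suffers.

```fortran
! Sketch only: deposit extra heating in the outer envelope.
! All numerical values below are illustrative.
&star_job
   relax_eps_extra = .true.           ! add an extra power source
   relax_eps_extra_min_steps = 20     ! take at least this many steps to change it

   ! extra power is specified at three relative mass coordinates q
   new_qlow_for_extra_eps  = 0.90     ! heating confined to the outer 10% by mass
   new_qpeak_for_extra_eps = 0.95
   new_qhigh_for_extra_eps = 1.00

   new_extra_eps_at_qlow  = 0d0       ! erg/g/s at qlow
   new_extra_eps_at_qpeak = 1d4       ! erg/g/s at qpeak
   new_extra_eps_at_qhigh = 1d4       ! erg/g/s at qhigh
/
```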
From: Ehsan M. <mor...@ia...> - 2010-08-11 00:23:47
Thanks Aaron, You were correct about the path. Of course I checked different combinations, but all of them failed. That was one... Yeah, I would be really interested to know if others could solve this. Cheers.
From: Aaron D. <aar...@gm...> - 2010-08-11 00:18:55
Hi Ehsan, I haven't had this same problem but I believe it is a good idea to compile the PGPLOT library (which is now distributed with MESA) using the same compiler as you use to compile MESA. Otherwise there can be some compatibility issues. I also noticed there is some disagreement between where your PGPLOT is installed and where LOAD_PGPLOT is pointing to it: > pgplot5: /usr/lib/pgplot5 > LOAD_PGPLOT = -L/usr/local/pgplot -lpgplot -lX11 -lxcb -lXau -lXdmcp -lpng > -lz -L/usr/lib -lXext > Make sure you are pointing to the correct place where all of these libraries are installed. I hope this helps! Maybe some other users will share their experience, too. Best wishes, Aaron
From: Ehsan M. <mor...@ia...> - 2010-08-11 00:09:31
Hi all,

This is my first post to the users list, since I have always asked Bill and Aaron directly, and these nice people have helped with great patience; I really appreciate their support.

I wanted to invoke the pgstar capability in MESA, but ran into a problem linking against one library. As an Ubuntu user, I installed PGPLOT with this series of commands:

sudo apt-get install pgplot5
sudo apt-get install libpng12-dev libpng3-dev zlib1g-dev libx11-dev
sudo apt-get install build-essential automake checkinstall

Next, I checked the path to pgplot5:

root@ehsan:/home/ehsan# whereis pgplot5
pgplot5: /usr/lib/pgplot5

and successfully made a test plot, so I am sure I have a working PGPLOT.

Now, to use it in MESA, I edited makefile_header as described by Bill:

USE_PGSTAR = YES
LOAD_PGPLOT = -L/usr/local/pgplot -lpgplot -lX11 -lxcb -lXau -lXdmcp -lpng -lz -L/usr/lib -lXext

However, I receive an error during the compile process regarding the Xext library. The error is:

/usr/bin/ld: cannot find -lXext
collect2: ld returned 1 exit status
make: *** [star] Error 1

FAILED

--
Moravveji, Ehsan.
Ph.D student of Astrophysics.
Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS), GavaZang Road, Zanjan 45137-66731, Iran.

Office: (+98)241-415 2212
Fax: (+98)241-415 2104
http://iasbs.ac.ir/students/moravveji
From: Bill P. <pa...@ki...> - 2010-07-31 15:46:08
Hi Josh, Please try the new release, 2578 and let me know what you find. gradB is gone and brunt N^2 is calculated differently. Following Mike Montgomery, we now compute N^2 using the following formula: N^2 = (g/r)*( (1/Gamma1)*(dln p/dln r) - (dln \rho/d ln r) ) s% brunt_N2(k) has the value at the outer boundary of cell k. This is always calculated now, independent of alpha_semiconvection or thermo_haline_coeff. and there is no eval_gradB flag either. The actual calculation is in the routine do_brunt_N2_for_cell in star/private/micro.f It is using the "raw" profiles of P, Rho, and R without any smoothing, so you will find some small jitters in the N^2 profile. Here's an example from a WD, M=0.82 Msun. (the negative values at the surface are where N^2 < 0) |
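For clarity, the expression quoted above for the Brunt frequency can be typeset as

```latex
N^2 = \frac{g}{r}\left[\frac{1}{\Gamma_1}\,\frac{d\ln P}{d\ln r} - \frac{d\ln\rho}{d\ln r}\right]
```

with brunt_N2(k) holding this value at the outer boundary of cell k, as described in the message.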
From: Bill P. <pa...@ki...> - 2010-07-30 04:34:06
Hi Joshua, On Jul 29, 2010, at 7:33 PM, Joshua Shiode wrote: > Hi all, > > First off, this is an awesome suite of tools for evolving stars, and thanks to everyone involved! And thank you for the kind words -- it is great to have that kind of feedback. ; - ) > I've just recently started using MESA to evolve massive stars for pulsation analysis with ADIPLS. > I'd like to use the value of the Brunt computed within MESA, but I've found that the gradB term acts very strangely. > I notice the following in my trials: > > With eval_gradB = .true., I find that the code actually sets gradB=0 everywhere. I believe I tracked this down to the fact that alpha_semiconvection and thermo_haline_coeff = 0, their defaults. However... with eval_gradB = .false. (alpha_semi... and thermo_... still at their defaults), the value of the Brunt *does* include a contribution from gradB, which is orders of magnitude too large (only a few in some cases, as many as 40 in others). All other mixing parameters are at their default values. I've been trying to understand the origin of this problem myself, but have had little success so far (I'm very much a novice with fortran.) > > I am currently evolving an 80 solar mass star at solar metallicity, but I have done this for lower mass stars (10, 25, 30..) and seen the same results. > > Even more perplexing.. when I watch the evolution with pgstar, the convective and radiative zones look fine, but the output values in the log files (brunt_n2, gradB, and sign_brunt_n2) reflect the odd behavior documented above... So how do you calculate the gradB term from the model info? Unfortunately, the mesa/eos doesn't know about partials wrt mu, so we can't rely on that. > > Unrelated to all this, I was also wondering if there are any routines in the mesa suite that could be used to re-grid an output profile for pulsation calculations. Stellar evolution codes seem to have problems importing models for other sources, and I'm afraid mesa/star is no exception. So the best use of itfor this will be to create models with the mesh tuned for the needs of pulsation calculations. I'd be happy to help with that if you are interested -- just let me know where you want to get enhanced resolution. For the Brunt/gradB issue, it would be a great help if you could take a saved model from mesa/star and calculate gradB independently. Then I can use that to debug or reimplement the relevant routines in star. Thanks, Bill |
From: Joshua S. <jhs...@be...> - 2010-07-30 02:48:27
Hi all, First off, this is an awesome suite of tools for evolving stars, and thanks to everyone involved! I've just recently started using MESA to evolve massive stars for pulsation analysis with ADIPLS. I'd like to use the value of the Brunt computed within MESA, but I've found that the gradB term acts very strangely. I notice the following in my trials: With eval_gradB = .true., I find that the code actually sets gradB=0 everywhere. I believe I tracked this down to the fact that alpha_semiconvection and thermo_haline_coeff = 0, their defaults. However... with eval_gradB = .false. (alpha_semi... and thermo_... still at their defaults), the value of the Brunt *does* include a contribution from gradB, which is orders of magnitude too large (only a few in some cases, as many as 40 in others). All other mixing parameters are at their default values. I've been trying to understand the origin of this problem myself, but have had little success so far (I'm very much a novice with fortran.) I am currently evolving an 80 solar mass star at solar metallicity, but I have done this for lower mass stars (10, 25, 30..) and seen the same results. Even more perplexing.. when I watch the evolution with pgstar, the convective and radiative zones look fine, but the output values in the log files (brunt_n2, gradB, and sign_brunt_n2) reflect the odd behavior documented above... Unrelated to all this, I was also wondering if there are any routines in the mesa suite that could be used to re-grid an output profile for pulsation calculations. Thanks! Josh Shiode |
From: Bill P. <pa...@ki...> - 2010-07-21 04:23:07
Hi Tursun, When you run mesa, it loads a few libraries dynamically (i.e., the libraries are not part of the mesa executable). That works fine if you are running on the same machine you used to build mesa -- or if you are running on a cluster where the machine that will be running mesa has access to the same file system. However, I believe that with Xgrid your program is sent to another mac on the local net to be run. So for that to work with mesa, the other machine must have the necessary libraries available. In your case, your job cannot run because it has been sent to a machine that lacks the needed library (libgomp.1.dylib). Moreover, the problem is not limited to libraries. Even if you have them, you also need the mesa/data directory to be available from the machine where you are running. The simple solution is don't use Xgrid! If you really-really-really want to use Xgrid, then you probably need to have mesa installed on any machine you'll use. Mesa works fine on a mac, and it works fine on clusters of linux boxes. But Xgrid is a bad choice unless you own all the machines and can install mesa on them all. --Bill On Jul 20, 2010, at 9:11 PM, Tursun Ablekim wrote: > Hi: > > I am trying to run mesa codes on Xgrid cluster which has Mac OS, but it didn't start the job. Here are what it says: > > Library not loaded: /usr/local/lib/libgomp.1.dylib > Referenced from: /var/xgrid/agent/tasks/Rx7uEsVa/working/./star > Reason: image not found > > I, somehow, learned that the library libgomp is related to open MPI and Mac OS doesn't support it. Is there anyone have encountered this problem? I would be very appreciated for any kind of solution. > > Thanks, > Tursun > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Sprint > What will you do first with EVO, the first 4G phone? > Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first_______________________________________________ > mesa-users mailing list > mes...@li... > https://lists.sourceforge.net/lists/listinfo/mesa-users |
From: Tursun A. <tur...@ws...> - 2010-07-21 04:11:22
Hi: I am trying to run mesa codes on Xgrid cluster which has Mac OS, but it didn't start the job. Here are what it says: Library not loaded: /usr/local/lib/libgomp.1.dylib Referenced from: /var/xgrid/agent/tasks/Rx7uEsVa/working/./star Reason: image not found I, somehow, learned that the library libgomp is related to open MPI and Mac OS doesn't support it. Is there anyone have encountered this problem? I would be very appreciated for any kind of solution. Thanks, Tursun |
From: Bill P. <pa...@ki...> - 2010-07-20 18:44:03
Hi Max, On Jul 20, 2010, at 10:52 AM, Max Katz wrote: > I ran a 7 Msun model starting yesterday, with mesh_delta_coeff set to something like 0.2, and a wind scheme roughly the same as Bill's, and it got through about 53000 timesteps and to a mass of about 1.3 Msun by itself, before it ran into convergence issues. Thanks for the info. Next question is what was the star doing at the time the convergence problems started? Was there a supersonic ejection (check v_surf/csound_surf). Was there a region at the base of the convection zone with Pgas << Prad? Does this look like a legitimate physical event or does it look like something that the code is creating? As you know, not all convergence problems are the result of bugs in the code -- some are "bugs" in nature!!! --Bill |
From: Max K. <ka...@rp...> - 2010-07-20 17:52:50
I ran a 7 Msun model starting yesterday, with mesh_delta_coeff set to something like 0.2, and a wind scheme roughly the same as Bill's, and it got through about 53000 timesteps and to a mass of about 1.3 Msun by itself, before it ran into convergence issues. Max Katz Rensselaer Polytechnic Institute Physics, Class of 2011 On Tue, Jul 20, 2010 at 1:21 PM, Bill Paxton <pa...@ki...> wrote: > I'm CC'ing this to the mesa-users list because it is an interesting case, > and > it has a message for everyone -- sometimes when the code doesn't > do what you want it to do, there might be an interesting physical reason. > it isn't always a bug in the code (usually it is, but not always!) > > > Hi Eric, > > > Turning on velocities and supersonic winds has given some interesting > results! > > I restarted the run at step 12000 which was a bit before the surface > velocities went crazy before. > > Here's a plot of v_surf/csound_surf covering the period where the previous > run died. > > > when the outward velocity is supersonic, the supersonic wind routine kicks > in. > the effect is dramatic! look at the mass loss -- about 0.5 Msun ejected in > a supersonic wind! > > > however the timesteps have gotten small, so now the progress is slow. > > > > I'll see if I can find a way around that. however this profile shows > that we are getting into difficult territory -- we have supersonic infall > happening in the lower envelope at the same time we have nearly > supersonic outward velocities near the surface. a lot of the envelope > that we tried to eject is now falling back and running into the lower > part of the envelope. a train wreck in progress. if left to itself > the code will try to follow the gory details and that will take lots > and lots of tiny timesteps. if we don't care about the details, > we need to find a way to make the code ignore them. > but that needs to be done in a way that doesn't mess up > the results for the stuff we are actually interested in. > tricky business this stellar evolution! ; - ) > > > > > -Bill > > > > > > > > > > > > > > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Sprint > What will you do first with EVO, the first 4G phone? > Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first > _______________________________________________ > mesa-users mailing list > mes...@li... > https://lists.sourceforge.net/lists/listinfo/mesa-users > > |
From: Bill P. <pa...@ki...> - 2010-07-20 17:21:22
I'm CC'ing this to the mesa-users list because it is an interesting case, and it has a message for everyone -- sometimes when the code doesn't do what you want it to do, there might be an interesting physical reason. it isn't always a bug in the code (usually it is, but not always!) Hi Eric, Turning on velocities and supersonic winds has given some interesting results! I restarted the run at step 12000 which was a bit before the surface velocities went crazy before. Here's a plot of v_surf/csound_surf covering the period where the previous run died. |
From: Bill P. <pa...@ki...> - 2010-07-20 16:39:53
Hi Eric, I ran a 7 Msun case last night to check what I get. This morning I found that it had stopped after 12272 steps. Winds had reduced the mass from 7.0 to 2.73 at that time and there had been several pulses as you see here. |
From: Bill P. <pa...@ki...> - 2010-07-20 14:54:22
Ehsan, one more thing about the pulsation output from mesa/star -- please understand that this is a part of the code that has not had significant use yet. so there will be bugs! if you come across something that looks weird, it is probably something that has never been checked before. so let me know when you run into problems and hopefully we can find fixes. for example, yesterday Mike pointed out that I had left out a factor of r in one of the FGONG terms! easy to fix, but I'm not an FGONG user so I hadn't found the problem on my own. One other thing to be careful about: as you know mesa/star splits the model into cells. Each variable is associated with a cell. Extensive variables (M, R, L) are defined at the outer boundary of the cell. Intensive ones (lnT, lnRho, composition) are defined as cell averages -- which is approximately the same as saying that they are the cell center values. So there is a small but significant offset in location for R vs. Rho for example. For doing precision pulsation work, you need to get values from the same location -- so for example in the FGONG output, I evaluate everything at the cell boundaries, so I need to interpolate T, Rho, composition, and such from the adjacent cell centers. Cheers, Bill |
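Since the distinction between cell centers and cell faces trips up many users, here is a minimal sketch of the kind of interpolation being described: a mass-weighted average that moves a cell-average quantity such as T or Rho onto the cell's outer face, where R, M, and L are defined. This is not MESA's actual FGONG routine; the array layout (k = 1 at the surface, index increasing inward, dm(k) the mass of cell k) is assumed for illustration only.

```fortran
! Sketch: interpolate cell-average values to cell outer faces.
! Not MESA's actual routine; layout assumptions are stated in the text above.
subroutine cell_to_face(nz, dm, val_center, val_face)
   implicit none
   integer, intent(in) :: nz                       ! number of cells
   double precision, intent(in) :: dm(nz)          ! mass of each cell
   double precision, intent(in) :: val_center(nz)  ! cell-average values (e.g. T, Rho)
   double precision, intent(out) :: val_face(nz)   ! values at each cell's outer face
   integer :: k
   val_face(1) = val_center(1)  ! surface face: no cell outside, so just copy
   do k = 2, nz
      ! face k sits between the centers of cell k-1 (outside) and cell k (inside);
      ! linear interpolation in mass coordinate gives a dm-weighted average
      val_face(k) = (dm(k)*val_center(k-1) + dm(k-1)*val_center(k)) / (dm(k-1) + dm(k))
   end do
end subroutine cell_to_face
```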
From: Bill P. <pa...@ki...> - 2010-07-20 14:54:20
Hi, I'm glad you are using mesa. The "create pre-main sequence" operation sometimes runs into cases that are too hard for it to do -- and it looks like the Z=0.1 is such a case. I'd suggest that you try creating what you want in a series of steps. First, create a pre-main sequence star of the desired mass but lower Z. Experiment to find a Z that is small enough to work. Let the lower Z model run for 10 or 20 steps and then save it. Now increase Z by using one of the "stellar engineering" operations that are provided as options in the &star_job part of the inlist. You might start by trying change_Z = .true. ! simply changes abundances; doesn't reconverge the model. new_Z = 0.1 That just goes into the model and changes Z in a single operation. Give it a try. If it your model can absorb the change and keep running, great. But if your model cannot converge when Z is changed so abruptly, try using relax_Z = .true. ! gradually change abundances, reconverging at each step. Hopefully one or the other of those will do the trick. Save the new model and remove the inlist commands for changing Z. I hope that works for you. Let me know! Cheers, Bill On Jul 20, 2010, at 2:47 AM, Masanobu Kunitomo wrote: > Dear Dr. Paxton, > > Hello, My name is Masanobu Kunitomo, a graduate student at Tokyo > Institute of Technology. > > First, I thank you for providing the great code MESA. > > Today I am writing to ask you about the problem for high metallicity. > While running the case of Z=0.1 with 'pre-main-sequence model', the > following message was displayed; > > ********************************************************************************* > Revision: 2258 > Tue Jul 20 17:30:20 JST 2010 > read extra star_job inlist1 from inlist--mine > loading eos data > loading kap data > finished loading > read extra controls inlist1 from inlist--mine > > > The terminal output contains the following information > > 'step' is the number of steps since the start of the run, > 'lg_dt_yr' is log10 timestep in years, > 'age_yr' is the simulated years since the start run, > 'lg_Tcntr' is log10 center temperature (K), > 'lg_Dcntr' is log10 center density (g/cm^3), > 'lg_Pcntr' is log10 center pressure (ergs/cm^3), > 'lg_Teff' is log10 surface temperature (K), > 'lg_R' is log10 surface radius (Rsun), > 'lg_L' is log10 surface luminosity (Lsun), > 'lg_LH' is log10 total PP and CNO hydrogen burning power (Lsun), > 'lg_L3a' is log10 total triple-alpha helium burning power (Lsun), > 'lg_LZ' is log10 total burning power excluding LH and L3a and > photodisintegrations (Lsun), > 'lg_LNuc' is log10 nuclear power excluding photodisintegration (Lsun), > 'lg_LNeu' is log10 total neutrino power (Lsun), > 'lg_Psurf' is log10 surface pressure (gas + radiation), > 'Mass' is the total stellar mass (Msun), > 'lg_Mdot' is log10 magnitude of rate of change of mass (Msun/year), > 'lg_Dsurf' is log10 surface density (g/cm^3), > 'H_rich' is the remaining mass outside of the hydrogen poor core, > 'H_poor' is the core mass where hydrogen abundance is <= 0.10E-03 > 'He_poor' is the core mass where helium abundance is <= 0.10E-03 > 'H_cntr' is the center H1 mass fraction, > 'He_cntr' is the center He4 mass fraction, > 'C_cntr' is the center C12 mass fraction, > 'N_cntr' is the center N14 mass fraction, > 'O_cntr' is the center O16 mass fraction, > 'Ne_cntr' is the center Ne20 mass fraction, > 'X_avg' is the star average hydrogen mass fraction, > 'Y_avg' is the star average helium mass fraction, > 'Z_avg' is the star average metallicity, > 'gam_cntr' is the 
center plasma interaction parameter, > 'eta_cntr' is the center electron degeneracy parameter, > 'pts' is the number of grid points in the current model, > 'jacs' is the number of jacobians created in the current step, > 'retry' is the number of step retries required during the run, > 'bckup' is the number of step backups required during the run, > 'dt_limit' is an indication of what limited the timestep. > > All this and more are saved in 'LOGS/star.log' during the run. > > create pre-main-sequence model > done build_pre_ms_model > set_initial_model_number 0 > v_flag = F > kappa_file_prefix gn93 > eos_file_prefix mesa > > > __________________________________________________________________________________________________________________________________________________ > > step lg_Tcntr lg_Teff lg_LH lg_Lnuc Mass > H_rich H_cntr N_cntr Y_surf X_avg eta_cntr pts retry > lg_dt_yr lg_Dcntr lg_Rsurf lg_L3a lg_Lneu lg_Mdot > H_poor He_cntr O_cntr Z_surf Y_avg gam_cntr jacs bckup > age_yr lg_Pcntr lg_L lg_LZ lg_Psurf lg_Dsurf > He_poor C_cntr Ne_cntr Z_cntr Z_avg v_div_cs dt_limit > __________________________________________________________________________________________________________________________________________________ > > stopping because of too many backups in a row 16 > > > > terminated evolution because hydro_failed_to_converge > > Tue Jul 20 17:39:55 JST 2010 > ********************************************************************************* > > Would you let me know what I should do? > > Sincerely, > > Masanobu Kunitomo > > ---------------------------------------------- > Masanobu Kunitomo > master's student > Department of Earth and Planetary Sciences > Tokyo Institute of Technology > Email: kun...@ge... > ---------------------------------------------- > |
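As a compact summary of the two-stage recipe above, here is a sketch of the &star_job fragment for the second stage, applied after the saved lower-Z pre-main-sequence model has been loaded (the load controls themselves are omitted since their names vary by release). Only the abundance-change controls quoted in the message are used.

```fortran
! Sketch of stage two: raise Z to 0.1 starting from the saved lower-Z model.
&star_job
   ! abrupt change (may fail to converge for a large jump in Z):
   ! change_Z = .true.
   ! new_Z = 0.1

   ! gradual change, reconverging at each step:
   relax_Z = .true.
   new_Z = 0.1
/
```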
From: Bill P. <pa...@ki...> - 2010-07-19 20:21:56
Hi Eric, Thanks for the update. It appears that now instead of a problem in the code, you may be hitting a problem with the star. This is speculation, so please feel free to show that it is wrong. But here's my current understanding. (Keep in mind that I'm a computer scientist with a smattering of astrophysics, so don't believe any of this without checking for yourself!) If you make a profile plot of pgas_div_ptotal, I expect you will find it drops to a low value just below the bottom of the outer convection zone. And that drop occurs where there is a spike in opacity. The temperature is around logT = 5.2 or so at that location (at least in the cases I've seen). And the opacity spike is from the "iron bump" that shows up around there. The high luminosity and the spike in opacity probably combine to give the small value for pgas_div_ptotal. And when prad >> pgas, things get unstable. If the timesteps are too large, the code starts getting bad things like negative cell volumes, so it responds by making the timesteps very small. By doing that it can deal with the instability, but as you note, it will never get through the AGB. All of this might be a reflection of problems with the 1D approximation in this situation; perhaps the instability turns into convection in 3D. So now forget everything I just said, and figure it out for yourself -- then tell me what's going on! BTW: just to check, you might try running your 7 Msun at very low Z to reduce the opacity. Then what happens? --Bill On Jul 19, 2010, at 1:06 PM, Eric Blais wrote: > Hi, > > I wanted to get back to you for a while but the run just kept on going! I decided to test version 2514 with a 7 solar masses star with mostly standard controls. It ran for 120 hours. It only stopped because that's the maximum amount of time allowed for a job on the cluster I'm using, so If I wanted to, I could set it to run for another 120 hours. > > After a few hours, the luminosity and temperature change very slowly, so I doubt it could eventually get to the WD stage. Towards the end, the time step was about 1.22E-6, so I guess this explains the extremely slow progress. In fact, only the first 1% of the log has a time step above 1. Were it not for this extremely slow TPAGB phase, the whole run could probably be completed on the cluster I'm using in under 5 hours. > > I thought you mind find that interesting. > > Eric Blais > > > > On Mon, Jul 12, 2010 at 3:53 PM, Bill Paxton <pa...@ki...> wrote: > Hi Eric, > > Please update to the current version. > The bug you are getting has been fixed. > Let me know how it goes! > > Thanks, > Bill > > > <HR_7Msun_test.JPG> |
From: Bill P. <pa...@ki...> - 2010-07-12 19:53:17
Hi Eric, Please update to the current version. The bug you are getting has been fixed. Let me know how it goes! Thanks, Bill |
From: Bill P. <pa...@ki...> - 2010-07-03 17:47:40
On Jul 3, 2010, at 10:01 AM, jingluan wrote: > Dear Bill, > > 1, In the inlist_massive (under work_massive_star), there are statements: > > ! at some point, trying to satisfy the residual limits becomes > hopeless, so you might as > ! well stop wasting lots of iterations trying. Here's what I did > after the end of oxygen burning. > ! > ! ! turn off resid checks early > ! max_iter_for_resid_tol1 = 3 > ! max_iter_for_resid_tol2 = -1 > ! max_iter_for_resid_tol3 = -1 > > What is residual limits please? the "residual" for an equation A = B is the value A - B which would be 0 for a perfect solution. the newton iteration will not accept a trial solution until the residuals are small enough. > And where shall one set the value of > max_iter_for_resid_tol1? did you try grep to look for it in mesa/star/public? > > 2, The nuclear reaction network "basic.net" is used for which range of > mass? What about "approx21.net"? It isn't a question of mass range but of stage of burning. Take a look in mesa/data/net_data/nets to see what isotopes and reactions are in basic.net. Do the same for approx21.net Alternatively, in your inlist, change to the net you are interested in and then set show_net_species_info = .true. show_net_reactions_info = .true. > > 3, Does EZ change nuclear reaction network due to given initial mass please? EZ only has 1 net that it uses for everything. But I doubt if you are using EZ, right? mesa/star uses the net you request -- with one exception: auto_extend_net = .true. ! if true, then automatically extend the basic net as needed ! first adds CO_burn extras and alpha to S32 ! later adds rest of alpha chain to Ni56 -B |
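The diagnostic switches Bill mentions are easy to miss in the flattened text above, so here they are collected into one fragment. Only the control names quoted in the message are used; whether each belongs in &star_job or &controls may depend on the release, so check the defaults files for your version.

```fortran
! Sketch: inspect and extend the reaction network (names taken from the
! message above; namelist placement may vary between releases).
&star_job
   show_net_species_info = .true.     ! print the isotopes in the chosen net
   show_net_reactions_info = .true.   ! print the reactions in the chosen net
   auto_extend_net = .true.           ! automatically extend basic.net as burning advances
/
```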
From: Bill P. <pa...@ki...> - 2010-07-03 17:38:02
Hi, Some things are calculated for you by mesa/star, but the list of things users might want to do is of course endless. The solution is to make use of run_star_extras and do it yourself. Keep a record of the previous value of photosphere_r in your modified version of run_star_extras (extras_finish_step would be the place to do that). -Bill On Jul 3, 2010, at 9:30 AM, jingluan wrote: > Dear Bill, > > How to take use of s% photosphere_r of the last model? That is to say, > now mesa is calculating 100th model for a run, and I want to calculate > the difference of photoshpere_r between 100th model and 99th model. > > Thank you very much :-) > > -- > Sincerely > Jing > > Ph.D candidate at physics.caltech > email: jin...@ca... > address: MC350-17,Caltech,1200E.California Blvd, > Pasadena, CA 91125 > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Sprint > What will you do first with EVO, the first 4G phone? > Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first > _______________________________________________ > mesa-users mailing list > mes...@li... > https://lists.sourceforge.net/lists/listinfo/mesa-users |
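A concrete version of Bill's suggestion may help: keep the previous step's value in a saved variable and compare it in extras_finish_step. This is only a sketch, assuming the run_star_extras hook layout of roughly this era; the hook's argument list and the surrounding boilerplate differ between releases, so graft the idea onto the full run_star_extras template that ships with your version rather than using this module as-is.

```fortran
! Sketch only: track the change in photosphere_r from one accepted step to
! the next.  Add the saved variable and the body of extras_finish_step to
! your version's run_star_extras template.
module run_star_extras
   use star_lib
   use star_def
   use const_def
   implicit none
   double precision, save :: prev_photosphere_r = -1d0
contains
   integer function extras_finish_step(id)
      integer, intent(in) :: id
      integer :: ierr
      type (star_info), pointer :: s
      extras_finish_step = keep_going
      call star_ptr(id, s, ierr)
      if (ierr /= 0) return
      if (prev_photosphere_r > 0d0) write(*,*) 'change in photosphere_r since last step:', &
         s% photosphere_r - prev_photosphere_r
      prev_photosphere_r = s% photosphere_r   ! remember this step's value for next time
   end function extras_finish_step
end module run_star_extras
```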
From: jingluan <jin...@ca...> - 2010-07-03 17:01:11
Dear Bill, 1, In the inlist_massive (under work_massive_star), there are statements: ! at some point, trying to satisfy the residual limits becomes hopeless, so you might as ! well stop wasting lots of iterations trying. Here's what I did after the end of oxygen burning. ! ! ! turn off resid checks early ! max_iter_for_resid_tol1 = 3 ! max_iter_for_resid_tol2 = -1 ! max_iter_for_resid_tol3 = -1 What is residual limits please? And where shall one set the value of max_iter_for_resid_tol1? 2, The nuclear reaction network "basic.net" is used for which range of mass? What about "approx21.net"? 3, Does EZ change nuclear reaction network due to given initial mass please? Thank you very much :-) -- Sincerely Jing Ph.D candidate at physics.caltech email: jin...@ca... address: MC350-17,Caltech,1200E.California Blvd, Pasadena, CA 91125 |
From: jingluan <jin...@ca...> - 2010-07-03 16:30:45
Dear Bill, How to take use of s% photosphere_r of the last model? That is to say, now mesa is calculating 100th model for a run, and I want to calculate the difference of photoshpere_r between 100th model and 99th model. Thank you very much :-) -- Sincerely Jing Ph.D candidate at physics.caltech email: jin...@ca... address: MC350-17,Caltech,1200E.California Blvd, Pasadena, CA 91125 |
From: Aaron D. <aar...@gm...> - 2010-07-03 13:55:49
Hi Jing, You have to re-compile MESA/star to get changes in the source code to take effect. MESA/star reads in the inlists every time it is run so that changes to the inlist are always accounted for. Aaron |