From: Gokula K. T. <gk...@gm...> - 2026-03-30 17:14:53
|
Dear Mr. Tom Otohal,

I just wanted to kindly follow up on my previous email regarding the issue with grid2paraview producing only zeros in cell data for a large adaptive grid. I would greatly appreciate any suggestions or guidance when you have time. Please let me know if I should provide any additional information to help diagnose the issue.

Thank you very much.

Best regards,
Gokula |
|
From: Samuel C. <lg...@mo...> - 2026-03-27 19:38:27
|
Hello SPARTA community,

I was hoping to get some clarification on calculating mass flux from surface reaction counts, as well as interpreting the outputs of compute_surf like n, nwt, nflux, nflux_incident, mflux, and mflux_incident, particularly in an axisymmetric domain with radial weighting.

My setup is a blunted cone geometry in hypersonic flow, and I'm interested in quantifying the mass flux of carbon being removed from the surface. For now, I only have one reaction using surf_react_adsorb: O(g) + C(b) --> CO(g). The amount of carbon removed from a surface element should be directly proportional to the reaction count on that element by "mass_carbon * fnum * reaction_count", which can then be divided by "dt" to get the carbon mass flux. However, this formula uses the global fnum as defined in the input script, not the local weighted fnum. I'm using "global weight cell radius", so each grid cell's weight should be "yc * (xhi - xlo)". From the docs, I read that "n" and "nwt" are related through the weight assigned to the grid cell in which the particle collision with the surface element occurred, but I get different results when calculating the weight as "yc * (xhi - xlo)". What is the relationship here?

Additionally, surf_react_adsorb creates a per-surf custom attribute "weight", but when I output it in the surface dump it is always just 1. Is this weight different from the weight set by the global command? How can I relate the per-element reaction count to the amount of mass removed from the surface?

As an alternate approach, I've looked into using the nflux/mflux and nflux_incident/mflux_incident outputs from compute_surf to track the mass removal, since they're already normalized by the grid cell weighting. My confusion comes from the definitions of each and which is applicable to what I'm trying to do. The docs note that "Incident flux sums over all the impacting particles, while net flux subtracts out reflected particles and includes effects from surface chemistry such as particle deletion". I also found where this was discussed on GitHub (https://github.com/sparta/sparta/issues/361). Based on this definition, it seems like the net flux value should correspond to the amount of CO. However, the surface dump outputs seem to show the opposite: when no reaction takes place on an element the net flux is positive and the incident flux is zero, but when a reaction does occur both fluxes are positive, with the incident flux smaller than the net flux.

Which definition of flux should I be considering for tracking material loss from the surface? Also, since particle deletion is taken into account, is the flux the amount of carbon monoxide from the surface, or just the amount of carbon? I found that multiplying nflux_incident by oxygen's atomic mass gave the same result as mflux_incident. Does this mean the incident flux is actually the amount of oxygen rather than C or CO? If so, is it valid to just multiply nflux_incident by carbon's atomic mass to get the carbon removal?

Thank you in advance for the help!

Best,
Sam Chumney |
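The relation Sam quotes can be sanity-checked with quick arithmetic. This is a sketch with hypothetical numbers (fnum, reaction count, element area, and averaging window are all made up); with radial weighting the effective fnum of a given particle additionally carries the cell weight, which is exactly the open question in this thread.

```python
# Rough arithmetic sketch of "mass_carbon * fnum * reaction_count" turned
# into a flux. All numeric inputs below are hypothetical illustrations.
m_C   = 12.011 * 1.66054e-27   # carbon atomic mass [kg]
fnum  = 1.0e15                 # hypothetical global fnum
n_rxn = 240                    # hypothetical reaction count on one element
area  = 2.0e-6                 # hypothetical element area [m^2]
dt    = 1.0e-7                 # timestep [s]
steps = 1000                   # averaging window [timesteps]

# mass removed from this element over the window, before any cell weighting
mass_removed = m_C * fnum * n_rxn            # [kg]
# corresponding surface mass flux
mflux = mass_removed / (area * dt * steps)   # [kg / m^2 / s]
```

With radial weighting, each counted reaction would be scaled by its cell weight before summing, which is why the global-fnum formula and the weighted outputs of compute_surf differ.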
|
From: Steve P. <sj...@gm...> - 2026-03-25 23:12:35
|
I am CCing Tom Otohal, to see if he has ideas. He is the author of the grid2paraview tools.

Steve |
|
From: Gokula K. T. <gk...@gm...> - 2026-03-25 23:07:36
|
Hello everyone,

I am working on a 3D rocket plume expansion simulation using SPARTA and I am trying to export the flow field data into ParaView using the grid2paraview.py tool. I am using the command:

pvpython grid2paraview.py grid.desc flow_paraview -r tmp_flow_fluent.011000

In an earlier run (which unfortunately failed due to an incorrect input file), the grid contained approximately 63,931,484 cells, the conversion to ParaView files worked correctly, and I was able to view the output. However, after the latest run, the number of cells increased to approximately 78,833,294. Using the same conversion process now produces ParaView files where all cell data arrays have a value range of [0, 0].

To diagnose the issue, I inspected the script and found that within the function

read_time_step_data(time_step_file_list, ug, id_hash)

the VTK unstructured grid (ug) cell data arrays are initialized but do not appear to be populated with the flow data, resulting in zero values being written to the output files. However, I may be misunderstanding the full workflow of the script, as I have not gone through the entire code in detail.

Since the same post-processing worked for ~64 million cells but not for ~79 million cells, I was wondering if this could be related to memory limits, array size limits, or some limitation in VTK/ParaView handling very large unstructured grids. Also, is there a recommended workflow for exporting large SPARTA adaptive grid results to ParaView?

Thank you.

Best regards,
Gokula |
|
From: Steve P. <sj...@gm...> - 2026-03-16 17:56:07
|
The best person to answer these Qs is Krishnan, the author of the surf_react adsorb model in SPARTA. I've CCd him on this email.

Steve |
|
From: Samuel C. <lg...@mo...> - 2026-03-16 17:43:36
|
Good morning/afternoon SPARTA users,

My name is Sam Chumney; I am a PhD student researching ablating air-carbon systems. Currently I am working on validating my DSMC simulations against experimental data and results from some CFD papers. In particular, one paper that I'm following uses Prata's Air-Carbon Ablation model. From my understanding, most (if not all) of those reactions could be implemented using the "surf_react adsorb" reaction style. The full list of reactions and their rates are attached for reference. I'm curious to know if anyone has experience using this model in SPARTA already?

Additionally, this model proposes two types of adsorbed oxygen, a weakly bound O(s) (single bond with C) and a strongly bound O*(s) (double bond with C), which have different adsorption selectivities and different desorption rates. Does SPARTA have a way to represent multiple adsorbed states of the same species?

Thank you for your time,
Sam |
|
From: WUYONGXIN <183...@16...> - 2026-03-13 05:37:33
|
Dear Steve ,
How can I set the initial emission velocity of particles on a custom incident surface to be related to the particle emission coordinates (x, y, z) while using the emit/surf command (given that emit/face/file is probably not usable)?
Thank you for your reply. |
|
From: Ramakrishnan, Sekaripuram [US] (DS) (Contr)
<Sek...@ng...> - 2026-03-12 19:39:54
|
Good morning, Stan! FYI.

Rama

From: Ramakrishnan, Sekaripuram [US] (DS) (Contr)
Sent: Wednesday, March 11, 2026 2:29 PM
To: 'spa...@li...' <spa...@li...>
Subject: 2daxisymm and 3D

My 2D-axisymmetric plume simulation converges to an Nexit of about 15.64 million particles. My 3D, quarter-geometry simulation for the same case using the same FNUM converges to an Nexit of about 3.86 million particles. The ratio 15.64/3.86 ~= 4 seems to indicate that the 2D-axisymmetric simulation is computing Nexit for the full 3D geometry and not the half geometry that it uses for computing forces. Is that correct?

Rama |
|
From: WUYONGXIN <183...@16...> - 2026-03-12 10:48:01
|
Dear Steve ,
I declared a surface when importing the surface file into the calculation, and later assigned this surface as a transparent surface.
Why does the log file report the following error at the end:
stats_style step cpu np nattempt ncoll nscoll nscheck
run 20000
ERROR: 2 transparent surface elements with invalid collision model or reaction model (/home/wu/桌面/sparta/src/surf.cpp:397) |
|
From: Ramakrishnan, Sekaripuram [US] (DS) (Contr)
<Sek...@ng...> - 2026-03-11 21:29:23
|
My 2D-axisymmetric plume simulation converges to an Nexit of about 15.64 million particles. My 3D, quarter-geometry simulation for the same case using the same FNUM converges to an Nexit of about 3.86 million particles. The ratio 15.64/3.86 ~= 4 seems to indicate that the 2D-axisymmetric simulation is computing Nexit for the full 3D geometry and not the half geometry that it uses for computing forces. Is that correct?

Rama |
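The factor-of-4 reading in this thread checks out arithmetically: an axisymmetric run represents the full 2*pi revolution, while the 3D run models only a quarter of the geometry, so at equal FNUM the particle counts should differ by roughly 4x.

```python
# Quick consistency check of the ratio quoted in this message.
n_axi     = 15.64e6   # Nexit of the 2D-axisymmetric run (from the post)
n_quarter = 3.86e6    # Nexit of the quarter-geometry 3D run (from the post)

ratio = n_axi / n_quarter
# full revolution vs quarter geometry -> expect ~4, plus statistical scatter
assert abs(ratio - 4.0) < 0.1
```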
|
From: <gre...@us...> - 2026-03-10 19:15:03
|
I want to perform a simulation where I add species at an angle of no less than 30 degrees relative to the inlet surface normal. I attempted this by assigning a streaming velocity along the negative normal of the surface, but this didn't obtain the desired result. I looked at fix emit/face/file and I didn't see an option to assign the velocity. I think the next thing to try is to write a custom fix emit/face that borrows the concept from surf_collide td, but before I try, am I missing an easier way to do this?

Greg |
|
From: Steve P. <sj...@gm...> - 2026-03-06 20:36:34
|
Hi - I don't see the specific issue that is causing this error, but there are at least a couple of problems with your input script. (1) Your stats_style command cannot output values from c_1, c_2, etc., because those computes produce per-grid data, not the global scalar values required by the stats_style command. (2) Similarly for your three CR variable definitions: you are defining an equal-style variable and using per-grid computes in the formula. You can only use scalar values in the formula for an equal-style variable.

I suggest you read Section 6.4 of the manual, which describes the different kinds of output (global, per-grid, etc.) that various SPARTA commands produce as output, or require as input.

Steve |
|
From: Moore, S. <st...@sa...> - 2026-03-04 16:33:09
|
Hi ZiXin,

I looked at the surf_collide code, and acc_vib is defined as a double, so I cannot see any bugs. Can you please attach a minimal working example that shows the issue?

Thanks,
Stan |
|
From: 刘子新 <liu...@ma...> - 2026-03-04 12:21:19
|
Dear SPARTA Developers,

My name is ZiXin Liu; I am writing to report a potential issue I have encountered and to seek your advice.

Description of the issue: I have discovered a potential inconsistency related to the acc_vib parameter in the surf_collide command. The parameter does not appear to correctly reflect the effect of incomplete accommodation of vibrational energy at the wall. While a difference in calculated wall heat flux is observed between cases where acc_vib = 0 and acc_vib ≠ 0 (which is expected), varying the value of acc_vib when it is non-zero (e.g., setting it to 0.01, 0.1, etc.) always yields the same result as when acc_vib = 1.

In summary:
acc_vib = 0 → Result A
acc_vib = 1, 0.1, 0.01, or any other non-zero value → Result B (all identical)

My questions:
1. Is this the intended behavior, or could it be a program bug?
2. If it is a bug, what would be the recommended way to address it? Any guidance on a potential fix or workaround would be greatly appreciated.

Thank you for your time and attention to this matter. I look forward to any insights you may have.

Best wishes |
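For context, the textbook Maxwell-type form of partial accommodation blends the incident energy with the fully accommodated wall value, so distinct non-zero accommodation coefficients should produce distinct post-collision energies. This sketch is that reference model, not necessarily what surf_collide implements (which is exactly what the bug report questions); the energies below are hypothetical:

```python
# Toy model of partial energy accommodation at a wall:
# post-collision energy = (1 - acc) * incident + acc * wall-equilibrium value.
def accommodate(e_in, e_wall, acc):
    return (1.0 - acc) * e_in + acc * e_wall

e_in, e_wall = 5.0e-20, 1.0e-20          # hypothetical energies [J]
r0  = accommodate(e_in, e_wall, 0.0)     # no accommodation: energy unchanged
r01 = accommodate(e_in, e_wall, 0.1)     # partial accommodation
r1  = accommodate(e_in, e_wall, 1.0)     # full accommodation: wall energy

assert r0 == e_in and r1 == e_wall
assert r01 != r1    # acc_vib = 0.1 should NOT reproduce the acc_vib = 1 result
```

If SPARTA's non-zero acc_vib values all collapse onto the acc_vib = 1 result, the behavior diverges from this reference form, which would support the bug hypothesis.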
|
From: hadiyat U. <had...@gm...> - 2026-03-01 11:33:30
|
Hope you all are doing well.
I am Hadiyat Ullah Khan, a master's student in aerospace engineering; my research direction is "intake system geometry of atmosphere-breathing electric propulsion for very low Earth orbit".
I have some questions.
I am simulating flow through a parabolic intake; the input I have set up is shared here with you:
seed 22339
units si
dimension 2
#--------------------------------------------------
boundary o ar p
create_box -0.1 1.1 0 0.55 -0.5 0.5
region para1 block -0.05 1.05 0 0.575 -0.5 0.5
region para2 block 0 1 0 0.5 -0.5 0.5
create_grid 100 50 1 &
levels 3 &
region 2 para1 2 2 1 &
region 3 para2 2 2 1
balance_grid rcb cell
write_grid data.grid.refined
#---Entry sampling area: x-direction [-0.005, 0.005], y-direction [0, 0.5]
---
region inlet_sample_region block -0.005 0.005 0.0 0.50 -0.5 0.5
#---Exit sampling area: x-direction [0.995, 1.005], y-direction [0, 0.15]
---
region outlet_sample_region block 0.995 1.005 0.0 0.15 -0.5 0.5
#---Create mesh groups for subsequent number density calculations ---
group inlet_cells grid region inlet_sample_region one
group outlet_cells grid region outlet_sample_region one
#---------------------------------------------------------
global nrho 7.058e15 fnum 1.0e8
variable Ain equal 0.785398
variable Aout equal 0.070685
#---------------------------------------------------------
# Surface load
read_surf parabolic_wall.surf group solid
read_surf monitor.surf transparent group monitor
read_surf monitor2.surf transparent group monitor2
#----------------------------------------
species air.species N2 O
mixture air N2 O nrho 7.058e15 vstream 7782.0 0.0 0.0 temp 813.6
mixture air N2 frac 0.37
mixture air O frac 0.63
surf_collide 1 diffuse 300.0 1.0
surf_collide 2 transparent
surf_collide 3 transparent
surf_modify solid collide 1
surf_modify monitor collide 2
surf_modify monitor2 collide 2
#---------------------------------------------------
collide vss air air.vss
#-------------------------------------------------
fix in emit/face air xlo
# ============================================================
# COLLECTION EFFICIENCY - CORRECTED (Example ke hisaab se)
# ============================================================
# Inlet density - grid compute with multiple keywords
compute 1 inlet_cells grid inflow/N2 nrho
compute 2 inlet_cells grid inflow/O nrho
compute 3 inlet_cells grid inflow nrho
# Outlet density
compute 4 outlet_cells grid inflow/N2 nrho
compute 5 outlet_cells grid inflow/O nrho
compute 6 outlet_cells grid inflow nrho
# Weighted counts (nwt) - grid compute with nwt keyword
compute 7 inlet_cells grid inflow/N2 nwt
compute 8 inlet_cells grid inflow/O nwt
compute 9 inlet_cells grid inflow nwt
compute 10 outlet_cells grid inflow/N2 nwt
compute 11 outlet_cells grid inflow/O nwt
compute 12 outlet_cells grid inflow nwt
# Reduce sums - total weighted counts
compute 13 reduce sum c_7 c_8 c_9
compute 14 reduce sum c_10 c_11 c_12
# Collection efficiency variables
variable eta_N2 equal c_14[1]/c_13[1]
variable eta_O equal c_14[2]/c_13[2]
variable eta_all equal c_14[3]/c_13[3]
# Concentration ratio variables
variable CR_nrho_N2 equal c_4/c_1
variable CR_nrho_O equal c_5/c_2
variable CR_nrho_all equal c_6/c_3
# Stats style - example jaisa hi format
stats_style step cpu np ncoll nscoll &
c_1 c_2 c_3 &
c_4 c_5 c_6 &
c_7 c_8 c_9 &
c_10 c_11 c_12 &
v_eta_N2 v_eta_O v_eta_all &
v_CR_nrho_N2 v_CR_nrho_O v_CR_nrho_all
timestep 1e-7
stats 100
stats_style step np
dump 2 image all 100 image.*.ppm type type pdiam 0.003 &
surf proc 0.01 size 512 512 zoom 1.75
dump_modify 2 pad 4
run 1000
I want to compute the density, collection efficiency, and compression ratio at the outlet and inlet, but when I run this script I get this error:
ERROR: Unrecognized compute style (../modify.cpp:467)
Can you please guide me on how to use the compute command, or what to use instead, because I could not find it in the manual.
Thanks for your help; I am waiting patiently.
Best Regards
|
|
From: <tim...@un...> - 2026-02-16 16:42:25
|
Dear SPARTA team,

Months later, I finally got around to posting an issue about this on the SPARTA GitHub: https://github.com/sparta/sparta/issues/605

This may have helped me to find the solution: I now think that the n parameter needs to be an integer, and that when SPARTA expects an integer, the fractional part and/or exponent is simply ignored. This would then lead to 1e3 being interpreted as 1, which then according to the other issue (https://github.com/sparta/sparta/issues/487) leads to an unexpected distribution of particles as seen in the images of my first message (which turns out not to be a bug at all, just a mismatch between expectations).

Thank you and best wishes
Tim

On 7 Nov 2025, at 16:49, Mosimann, Tim (SPACE) <tim...@un...> wrote:

Dear SPARTA team,

I hope this email is directed to the right recipients; on https://sparta.github.io/unbug.html it says to send you an email if one is not sure whether something is a bug. While trying to simulate a simple flow of particles from the zlo face of a box of 10x10x10 cells, I encountered some strange behaviour:

– When adding 1000 particles per timestep, the code works as expected: <Np1000.jpg>
– When adding 1e3 particles per timestep, the code only emits particles in a specific cell's face: <Np1e3.jpg> This may be related to the bug reported in https://github.com/sparta/sparta/issues/487
– When adding 100 particles per timestep, the code works as expected: <Np100.jpg>
– When adding 10 particles per timestep, the code only emits particles in one strip of cells: <Np10.jpg>
– When adding 1 particle per timestep, the code again only emits particles in a specific cell: <Np1.jpg>

This seems to imply that the code treats n 1000 as 1000 particles per timestep, but n 1e3 as 1 particle per timestep. Some further testing did not disprove the theory that any exponential notation is ignored by this command and only the number before the e is used. This leads me to the following questions: Is the theory right? If so, is such ignoring of exponential notation a bug, or is one not intended to specify Np in exponential notation? And if it is the latter, is there a guideline as to when it is okay to write floats in exponential notation vs when SPARTA ignores it?

Thank you and best wishes
Tim

–––––––––
MRE:

seed 12345
boundary oo oo oo
create_box 0 1 0 1 0 1
create_grid 10 10 10
species ar.species Ar
mixture air Ar vstream 0.0 0.0 0.0 temp 273.
fix in emit/face air zlo n 1e3 perspecies no
dump im image all 100 out/image.*.ppm proc type pdiam 0.03
dump_modify im pad 4
timestep 1e-6
run 1000

(replace the 1e3 by other values as wished, ar.species copied from an example folder)
SPARTA version 20Jan2025 on a Mac |
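Tim's theory matches C-style atoi semantics: integer parsing consumes digits and stops at the first non-digit, so "1e3" yields 1 while "1000" yields 1000. The sketch below emulates that behavior; whether SPARTA's input parser does exactly this is the assumption being tested in the linked issue.

```python
# Emulation of C atoi: optional sign, then digits, stop at first non-digit.
def atoi_like(s):
    s = s.strip()
    i, sign = 0, 1
    if i < len(s) and s[i] in "+-":
        sign = -1 if s[i] == "-" else 1
        i += 1
    n = 0
    while i < len(s) and s[i].isdigit():
        n = n * 10 + int(s[i])
        i += 1
    return sign * n

assert atoi_like("1000") == 1000
assert atoi_like("1e3") == 1     # exponent silently dropped, matching the report
assert atoi_like("10") == 10
```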
|
From: Gallis, M. A. <ma...@sa...> - 2026-02-12 15:25:37
|
Federica,

The axisymmetric calculation covers only half of the domain. Thus, the y-force will not be zero. The code integrates over the geometry it computes. The total force would include an equal and opposite-sign contribution for the part you don't simulate. If you include the negative-y contribution, the y-force will be identically zero.

Michael |
|
From: Federica P. <fed...@po...> - 2026-02-12 15:11:35
|
Dear SPARTA users,

I am having some trouble understanding what happens in the force calculation command compute surf in the axi-symmetric case. In particular, I ran some axi-symmetric simulations to compare with previously simulated 3D cases. For the x component of the force (and thus the drag), I obtain a result consistent with the 3D case, with a difference below 1%. However, for the y component of the force I obtain a non-zero value that is not consistent, and it cannot be explained as statistical noise.

I am wondering whether SPARTA handles the force calculation differently in the two directions. Specifically, does it integrate fx as already derived from the body's rotation, but not apply the same treatment in the y direction? This aspect is not clarified in the manual.

Thank you for your support.

Best,
Federica Portis |
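The symmetry argument in Michael's reply above can be illustrated numerically: each simulated surface panel at y > 0 has a mirror panel at y < 0 whose y-force contribution has opposite sign, so the half-domain y-force is nonzero while the full-geometry sum cancels exactly. The per-panel values here are hypothetical:

```python
# Toy illustration: mirroring the simulated half cancels the y-force.
half_fy   = [0.12, -0.03, 0.40, 0.07]   # hypothetical per-panel fy, y > 0 half
mirror_fy = [-f for f in half_fy]       # mirror panels at y < 0

assert sum(half_fy) != 0.0              # half-domain y-force is nonzero
assert abs(sum(half_fy) + sum(mirror_fy)) < 1e-15   # full geometry cancels it
```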
|
From: Ramakrishnan, S. [U. (D. (Contr)
<Sek...@ng...> - 2026-02-10 16:32:17
|
Thank you, Michael.
Rama
From: Gallis, Michael A. <ma...@sa...>
Sent: Tuesday, February 10, 2026 7:20 AM
To: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...>; spa...@li...
Subject: EXT :Re: [EXTERNAL] [sparta-users] create_grid - subset
Rama
You need to compile with the -DSPARTA_BIGBIG option, to allow for 64 bits.
Michael
From: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...<mailto:Sek...@ng...>>
Date: Monday, February 9, 2026 at 8:06 PM
To: spa...@li...<mailto:spa...@li...> <spa...@li...<mailto:spa...@li...>>
Subject: [EXTERNAL] [sparta-users] create_grid - subset
I am able to run a 3D simulation involving an emitting surface with a box in the first quadrant (y>0, z>0) quite successfully using the subset command
create_grid ${ncells_x} ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15 # number of cells in the x, y, and z directions
create_grid 128 ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15
create_grid 128 64 ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15
create_grid 128 64 64 clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15
Created 686240 child grid cells
Total number of cells = 3115520.
When I increase the subset refinement to 16x16x16, the code aborts with the following message:
create_grid ${ncells_x} ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16 # number of cells in the x, y, and z directions
create_grid 128 ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16
create_grid 128 64 ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16
create_grid 128 64 64 clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16
ERROR: Hierarchical grid induces cell IDs that exceed 32 bits (../create_grid.cpp:247)
What causes this error, given that the total number of cells is expected to be only in the millions?
Rama
|
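For context on why "only millions" of cells can still overflow a 32-bit ID: hierarchical cell IDs encode the full refinement path, not just a running cell count. A back-of-the-envelope sketch (assuming one packed index per grid level, with each level contributing enough bits for its local cell index; the actual encoding in create_grid.cpp may differ in detail) reproduces the 15-vs-16 behavior seen above:

```python
import math

def id_bits(cells_per_level):
    # Bits needed if each level's local cell index (starting at 1) is
    # packed into a single integer ID: ceil(log2(n + 1)) bits per level.
    # This is an assumption about the encoding, not SPARTA source code.
    return sum(math.ceil(math.log2(n + 1)) for n in cells_per_level)

coarse = 128 * 64 * 64                  # level-1 grid from the input script
bits_15 = id_bits([coarse, 15 ** 3])    # subset refined 15x15x15
bits_16 = id_bits([coarse, 16 ** 3])    # subset refined 16x16x16
print(bits_15, bits_16)                 # 32 vs 33
```

Under this assumption, the 15x15x15 refinement just fits a 32-bit cell ID, while 16x16x16 needs 33 bits, which is consistent with Michael's advice to compile with -DSPARTA_BIGBIG for 64-bit IDs.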
|
From: Gallis, M. A. <ma...@sa...> - 2026-02-10 15:20:44
|
Rama
You need to compile with the -DSPARTA_BIGBIG option, to allow for 64 bits.
Michael
From: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...>
Date: Monday, February 9, 2026 at 8:06 PM
To: spa...@li... <spa...@li...>
Subject: [EXTERNAL] [sparta-users] create_grid - subset
I am able to run a 3D simulation involving an emitting surface with a box in the first quadrant (y>0, z>0) quite successfully using the subset command
create_grid ${ncells_x} ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15 # number of cells in the x, y, and z directions
create_grid 128 ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15
create_grid 128 64 ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15
create_grid 128 64 64 clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15
Created 686240 child grid cells
Total number of cells = 3115520.
When I increase the subset refinement to 16x16x16, the code aborts with the following message:
create_grid ${ncells_x} ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16 # number of cells in the x, y, and z directions
create_grid 128 ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16
create_grid 128 64 ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16
create_grid 128 64 64 clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16
ERROR: Hierarchical grid induces cell IDs that exceed 32 bits (../create_grid.cpp:247)
What causes this error, given that the total number of cells is expected to be only in the millions?
Rama
|
|
From: Ramakrishnan, S. [U. (D. (Contr)
<Sek...@ng...> - 2026-02-10 03:04:40
|
I am able to run a 3D simulation involving an emitting surface with a box in the first quadrant (y>0, z>0) quite successfully using the subset command
create_grid ${ncells_x} ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15 # number of cells in the x, y, and z directions
create_grid 128 ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15
create_grid 128 64 ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15
create_grid 128 64 64 clump xyz levels 2 subset 2 32*34 1*4 1*4 15 15 15
Created 686240 child grid cells
Total number of cells = 3115520.
When I increase the subset refinement to 16x16x16, the code aborts with the following message:
create_grid ${ncells_x} ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16 # number of cells in the x, y, and z directions
create_grid 128 ${ncells_y} ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16
create_grid 128 64 ${ncells_z} clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16
create_grid 128 64 64 clump xyz levels 2 subset 2 32*34 1*4 1*4 16 16 16
ERROR: Hierarchical grid induces cell IDs that exceed 32 bits (../create_grid.cpp:247)
What causes this error, given that the total number of cells is expected to be only in the millions?
Rama
|
|
From: Ramakrishnan, S. [U. (D. (Contr)
<Sek...@ng...> - 2026-02-08 02:31:49
|
Hello Stan,

Are you planning to change 'fix emit' back to the old version in the next release?

Rama

From: Moore, Stan <st...@sa...>
Sent: Tuesday, December 23, 2025 3:27 PM
To: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...>; Steve Plimpton <sj...@gm...>; Moore, Stan via sparta-users <spa...@li...>; Gallis, Michael A. <ma...@sa...>
Subject: EXT :Re: [EXTERNAL] [sparta-users] Possible bug

Hi Rama,

In fix emit/surf, the "nrho" keyword was changed to "density" and the "temp" keyword was changed to "temperature". I'm tempted to change it back, though, since it has tripped up a lot of people.

Stan

From: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...>
Sent: Tuesday, December 23, 2025 4:07 PM
To: Steve Plimpton <sj...@gm...>; Moore, Stan via sparta-users <spa...@li...>; Gallis, Michael A. <ma...@sa...>
Subject: [EXTERNAL] [sparta-users] Possible bug

A case with the following command runs when I use the 4Sep2024 version. When I tried to run the same case using the 24Sep2025 version, I encountered the error message "Illegal fix emit/surf command".

fix in emit/surf plume_species Plume custom temp s_mytemp custom nrho s_mynrho custom vstream s_mystream

Rama |
|
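For readers hitting the same "Illegal fix emit/surf command" error: per Stan's note quoted above, only the keyword names changed between these versions, so Rama's command should need just the two keywords renamed (a sketch assuming the rest of the syntax is unchanged; plume_species, Plume, and the s_my* variables are from the original input):

```
# 4Sep2024 syntax:
fix in emit/surf plume_species Plume custom temp s_mytemp custom nrho s_mynrho custom vstream s_mystream

# 24Sep2025 syntax ("temp" -> "temperature", "nrho" -> "density"):
fix in emit/surf plume_species Plume custom temperature s_mytemp custom density s_mynrho custom vstream s_mystream
```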
From: Ramakrishnan, S. [U. (D. (Contr)
<Sek...@ng...> - 2026-02-07 20:06:43
|
Thank you, Pablo.

Rama

From: Pablo Rubial Yáñez <pab...@gm...>
Sent: Saturday, February 7, 2026 12:04 PM
To: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...>
Cc: Steve Plimpton <sj...@gm...>; spa...@li...
Subject: EXT :Re: [sparta-users] Re: Re: [EXTERNAL] compute lambda/grid

Hi Rama,

I think the following change will fix that error:

fix 2 ave/grid all 1 100 100 c_2[1]

Best regards,
Pablo

On Sat, Feb 7, 2026, 20:43, Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...> wrote:

Hello Steve,

Finally, I got the opportunity to try your suggestion using version 24Sep2025.

compute 1 grid all species nrho
compute 2 grid all all temp
fix 1 ave/grid all 1 100 100 c_1[*]
fix 2 ave/grid all 1 100 100 c_2

It returned with the following error message:

ERROR: Fix ave/grid compute does not calculate per-grid vector (../fix_ave_grid.cpp:156)

Rama

[earlier messages in this thread trimmed; the full exchange appears in the archive entries below]
_______________________________________________
sparta-users mailing list
spa...@li...
https://lists.sourceforge.net/lists/listinfo/sparta-users |
|
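Combining Steve's recommended compute/fix layout with Pablo's c_2[1] fix, the thread converges on something like the following input fragment (assembled from the quoted emails as a sketch, not a verified run):

```
compute 1 grid all species nrho         # per-species number density: Nspecies columns
compute 2 grid all all temp             # one temperature value per grid cell
fix 1 ave/grid all 1 100 100 c_1[*]
fix 2 ave/grid all 1 100 100 c_2[1]     # Pablo's fix: reference the column explicitly
compute 3 lambda/grid f_1[*] f_2 lambda tau
dump 1 grid all 1000 tmp.grid id c_3[*]
```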
From: Pablo R. Y. <pab...@gm...> - 2026-02-07 20:04:33
|
Hi Rama,

I think the following change will fix that error:

fix 2 ave/grid all 1 100 100 c_2[1]

Best regards,
Pablo

On Sat, Feb 7, 2026, 20:43, Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...> wrote:

> Hello Steve,
>
> Finally, I got the opportunity to try your suggestion using version 24Sep2025.
>
> compute 1 grid all species nrho
> compute 2 grid all all temp
> fix 1 ave/grid all 1 100 100 c_1[*]
> fix 2 ave/grid all 1 100 100 c_2
>
> It returned with the following error message:
>
> ERROR: Fix ave/grid compute does not calculate per-grid vector (../fix_ave_grid.cpp:156)
>
> Rama
>
> [earlier messages in this thread trimmed; the full exchange appears in the archive entries below]
> _______________________________________________
> sparta-users mailing list
> spa...@li...
> https://lists.sourceforge.net/lists/listinfo/sparta-users |
|
From: Ramakrishnan, S. [U. (D. (Contr)
<Sek...@ng...> - 2026-02-07 19:43:06
|
Hello Steve,

Finally, I got the opportunity to try your suggestion using version 24Sep2025.

compute 1 grid all species nrho
compute 2 grid all all temp
fix 1 ave/grid all 1 100 100 c_1[*]
fix 2 ave/grid all 1 100 100 c_2

It returned with the following error message:

ERROR: Fix ave/grid compute does not calculate per-grid vector (../fix_ave_grid.cpp:156)

Rama

From: Steve Plimpton <sj...@gm...>
Sent: Thursday, January 15, 2026 3:40 PM
To: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...>
Cc: Moore, Stan <st...@sa...>; spa...@li...
Subject: EXT :Re: [sparta-users] Re: [EXTERNAL] compute lambda/grid

Try these lines and I think it should work:

compute 1 grid all species nrho
compute 2 grid all all temp
fix 1 ave/grid all 10 100 1000 c_1[*]
fix 2 ave/grid all 10 100 1000 c_2
compute 3 lambda/grid f_1[*] f_2 lambda tau
dump 1 grid all 1000 tmp.grid id c_3[*]

The key point is: do not use one compute grid command for per-species output with both nrho and temp as args. The issue giving the error on line 372 in your latest message is that the "nrho" argument to compute lambda/grid needs to have Nspecies columns in the fix and corresponding compute. If you include "temp" in the compute, then it will have 2*Nspecies columns.

Also, the "temp" keyword is meant to be the average temperature over all species, so compute 2 should use "all", not "species", so that one value (per grid cell) is produced by compute 2.

The latest doc page for compute lambda/grid needs to be updated to reflect this email and the current version of the code.

Steve

On Wed, Jan 7, 2026 at 7:42 PM Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...> wrote:

I tried a 'fresh' start case and that also failed.

Rama

From: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...>
Sent: Wednesday, January 7, 2026 6:26 PM
To: Moore, Stan <st...@sa...>; spa...@li...
Subject: EXT :Re: [sparta-users] [EXTERNAL] compute lambda/grid

Hello Stan,

I tried to restart using version 24Sep2025:

Reading restart file ...
restart file = 4 Sep 2024, SPARTA = 24 Sep 2025

I added the following lines to compute lambda:

compute 1 grid all species nrho temp
fix 1 ave/grid all 1 100 100 c_1[*]
compute 2 lambda/grid f_1[*] f_1[2] lambda tau

The code exited with the following message:

ERROR: Compute lambda/grid nrho does not match system species count (../compute_lambda_grid.cpp:372)

I am wondering whether the error is related to restart.

Rama

From: Moore, Stan <st...@sa...>
Sent: Wednesday, January 7, 2026 12:39 PM
To: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...>; spa...@li...
Subject: EXT :Re: [EXTERNAL] [sparta-users] compute lambda/grid

The syntax for compute lambda/grid changed in the 20Jan2025 version of SPARTA, as noted here: https://github.com/sparta/sparta/releases. The 4Sep2024 version is getting pretty old and out of sync with the docs; please consider upgrading.

Stan

From: Ramakrishnan, Sekaripuram [US] (DS) (Contr) <Sek...@ng...>
Sent: Wednesday, January 7, 2026 1:25 PM
To: spa...@li... <spa...@li...>
Subject: [EXTERNAL] [sparta-users] compute lambda/grid

I copied the command for computing 'lambda' from the manual, modified the frequency for the fix command, excluded 'tau' in 'compute 2', and ran my case.

The version used was: sparta/4-sep2024

The run encountered the message shown below.

[attached screenshot of the error message]

Is this a bug?

Rama

_______________________________________________
sparta-users mailing list
spa...@li...
https://lists.sourceforge.net/lists/listinfo/sparta-users |