gdalgorithms-list Mailing List for Game Dev Algorithms (Page 2)
From: Richard F. <ra...@gm...> - 2014-01-14 19:41:11
I was going to plug in the Morton-coded version, using the code from ryg's blog: http://fgiesen.wordpress.com/2009/12/13/decoding-morton-codes/
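For reference, a minimal sketch of the 2D Morton encode/decode along the lines of the linked post - the standard mask-and-shift bit interleaving. The function names follow common usage rather than any code posted in this thread:

    #include <cstdint>

    // Sketch in the style of the linked post; verify against it before use.
    // Spread the low 16 bits of x so a zero bit sits between each pair.
    static std::uint32_t Part1By1(std::uint32_t x) {
        x &= 0x0000ffff;
        x = (x ^ (x << 8)) & 0x00ff00ff;
        x = (x ^ (x << 4)) & 0x0f0f0f0f;
        x = (x ^ (x << 2)) & 0x33333333;
        x = (x ^ (x << 1)) & 0x55555555;
        return x;
    }

    // Inverse of Part1By1: keep every other bit and pack them together.
    static std::uint32_t Compact1By1(std::uint32_t x) {
        x &= 0x55555555;
        x = (x ^ (x >> 1)) & 0x33333333;
        x = (x ^ (x >> 2)) & 0x0f0f0f0f;
        x = (x ^ (x >> 4)) & 0x00ff00ff;
        x = (x ^ (x >> 8)) & 0x0000ffff;
        return x;
    }

    // 10-bit x/y on a 1024x1024 grid <-> 20-bit Morton code.
    std::uint32_t EncodeMorton2(std::uint32_t x, std::uint32_t y) {
        return (Part1By1(y) << 1) | Part1By1(x);
    }
    std::uint32_t MortonX(std::uint32_t code) { return Compact1By1(code); }
    std::uint32_t MortonY(std::uint32_t code) { return Compact1By1(code >> 1); }

Sorting the 20-bit codes puts spatially adjacent cells next to each other, which is what makes the delta encoding discussed later in the thread pay off.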
From: Chris B. <cb...@gm...> - 2014-01-14 19:11:47
You might also look into Morton order encoding, as it is a simple and efficient locality-preserving way to do lossless n-dimensional tree structures - this is of course best for applications like map data. Your mileage may vary.

-Chris

From the machine - iPhone 5
From: Oscar F. <os...@tr...> - 2014-01-14 17:01:19
Pretty much. The space filling curve might not be necessary but you may be able to use it to keep smaller gaps between the entries which are local to each other spatially.
From: Richard F. <ra...@gm...> - 2014-01-14 16:52:43
That sounds good. Something like the JPEG VLC for the actual distance between entries and just store deltas on a space filling curve should be good then.
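A minimal sketch of what that could look like: sort the occupied cells by Morton code (or any space-filling-curve index), delta-encode, and write each gap as a LEB128-style varint rather than JPEG's Huffman-coded VLC. The helper names are made up for illustration:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Illustrative sketch, not code from the thread.
    // Append v as a LEB128-style varint: 7 payload bits per byte, high bit = "more".
    static void PutVarint(std::vector<std::uint8_t>& out, std::uint32_t v) {
        while (v >= 0x80) {
            out.push_back(std::uint8_t((v & 0x7f) | 0x80));
            v >>= 7;
        }
        out.push_back(std::uint8_t(v));
    }

    // codes: one 20-bit Morton code per occupied cell (unchanged entities).
    // Sorting + delta encoding turns spatial clumping into runs of small gaps,
    // which the varint then stores in one byte most of the time.
    std::vector<std::uint8_t> CompressOccupied(std::vector<std::uint32_t> codes) {
        std::sort(codes.begin(), codes.end());
        std::vector<std::uint8_t> out;
        PutVarint(out, std::uint32_t(codes.size()));
        std::uint32_t prev = 0;
        for (std::uint32_t c : codes) {
            PutVarint(out, c - prev);  // gaps are >= 0 once sorted
            prev = c;
        }
        return out;
    }

Decoding just reverses the two loops; with 4-7k clumped points, the typical gap should fit in a single byte.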
From: Oscar F. <os...@tr...> - 2014-01-14 13:11:13
Just a random thought, so apologies if I'm talking out of my arse, but ... couldn't you just use something like the bit plane compression from Wavelet Difference Reduction (WDR)?

A lot depends on your data, I appreciate, so this may not work out as any significant saving ...

I wrote about the compression method in my blog a few years back:

http://trueharmoniccolours.co.uk/Blog/?p=55

Obviously that's in the context of a full wavelet image compressor, so you will have to just grab the part I explain in the link above.

Also, apologies for the code formatting on my blog ... I'll fix it one day ;)

Oscar
From: Richard F. <ra...@gm...> - 2014-01-14 12:38:22
The web is full of different solutions for compressing things, but some of you have probably already done this and found a compression scheme with a really good ratio that fits what I'm asking about.

I've got a load of entities (4-7k) that live on a grid (1024x1024), and I need to persist them to a savegame which gets sent across the network to keep the save safe. I need to compress it. I'm compressing the entities in the world already; ones that have been affected cost a few bytes, but for the ones that don't have any changes, all I'm left with is the coords. There are only a few that need the fatter compression, but at the numbers I'm looking at, those "unchanged entity" coords add up to a lot of data when stored as 20-bit structs. This is too much for sending over the network regularly as part of a save.

I've had some ideas on how to compress the data, but they might be crap. I don't know.

I can't easily regenerate the set, so I thought someone who wanted to compress consumables might know the name of a good solution for sparse bitset compression. The bits are clumped around different areas of the grid, so I feel that something that leverages spatial coherence might do well.

Any leads would be highly appreciated.

--
fabs();
"The fact that an opinion has been widely held is no evidence whatever that it is not utterly absurd." - Bertrand Russell
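For scale, a quick back-of-envelope on the numbers above (arithmetic added here, not from the thread; the bound uses the standard approximation log2 C(n,k) ~ k * (log2(n/k) + log2 e) for k << n):

    7,000 coords x 20 bits          = 140,000 bits   ~ 17.1 KB raw
    full 1024x1024 occupancy bitmap = 1,048,576 bits = 128 KB
    entropy bound for 7,000 occupied of 2^20 cells:
        log2 C(2^20, 7000) ~ 7000 * (log2(149.8) + 1.44)
                           ~ 7000 * 8.67 ~ 60,700 bits ~ 7.4 KB

So even for a uniformly random set there is roughly 2x headroom below the raw coordinate list, and the spatial clumping Richard mentions should allow a good encoder to do better still.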
From: Sebastian S. <seb...@gm...> - 2013-07-23 02:35:06
Also, it's worth considering your expectations here. If you intend to keep enough memory for textures that you'll "almost never" have to turn down a MIP-loading request, you don't really have to over-think this too much. Simply always loading the lowest-resolution MIP (taking object visibility into account) will do a decent job - it's sort of like having a global resolution clamp and just lowering it until stuff fits in memory, which won't be optimal, but tends to behave consistently and predictably.

--
Sebastian Sylvan
From: Sebastian S. <seb...@gm...> - 2013-07-23 02:25:07
On Mon, Jul 22, 2013 at 5:21 PM, Josh Green <in...@gm...> wrote:
> Unfortunately the above algorithm looks like it'd be very serial, and involve a bunch of sorts... But I think it might achieve good results?

Seems reasonable. You should of course try to keep your texture requests in a priority queue instead of re-sorting in each iteration, and use "adjust key"/"decrease key" whenever you decide to drop a MIP level for a key. There are heap implementations where this operation is O(1) (e.g. a Fibonacci heap), so it's at least possible to be efficient (in practice, depending on the number of textures, a straight binary heap a la STL may be faster due to being less complicated).

Seb

--
Sebastian Sylvan
From: Jamie F. <ja...@qu...> - 2013-07-23 01:18:05
On 23/07/2013 01:21, Josh Green wrote:
<snip>
> Have any of you seen these implementations executed in a product or game before? Was it successful? Tricky? Worth the time?

We've seen sorting costs for texture prioritization be surprisingly high, without doing anything that complex, particularly as you're often moving around, priorities change, and you haven't even finished loading the mip / texture you wanted to load in the first place (depending on hardware, etc.) by the time it's no longer the most important.

Our experience has been that it's not worth over-thinking it. Prioritize higher mips close to your points of interest (usually cameras), but don't keep too big a queue for low-priority things, unless you like doing lots of sorting. Once you've started loading something, finish loading it; yes, you might bin it soon, but you need some hysteresis in the system. Large mips are generally worth seeking and loading on their own; small mips may be worth generating from the smallest mip you think is worth loading.

For our tech, prioritization is configurable via plug-ins, so our users can get in and prioritize whatever they think is most important, but it's usually just whatever's near the camera (or is expected to be near the camera soon). You can do a bit better if you know something about texture density from a pre-process, but it's not necessary, particularly if things are built with a reasonably consistent texture density.

So I'd build something simple, and see how well it works. Almost never a bad way to go about it :)

Jamie
From: Josh G. <in...@gm...> - 2013-07-23 00:21:21
Thanks for your thoughts.

> Alternatively - object X is Y meters from camera - what should I load.

This is essentially what my current calculation gives me. It describes what I need to load in order to render the scene at its highest image quality. The problem though is then applying a memory constraint to that, which says I can't have everything I need... So what will I choose to load?

I must admit, I am currently thinking down the line that Sebastian described above:

> Assuming you only load one MIP level at a time (i.e. if MIP N is currently loaded, you'll only consider N-1), maybe something like:
> (desired_mip - current_mip) * num_pixels / memory_usage, add tuning knobs as needed...

I have the concept of the "ideal" mip I would like loaded. And I also think dealing with one mip at a time (from a prioritisation point of view at least) makes sense. Though that doesn't necessarily mean only one mip is loaded at a time, just that one mip is considered for being loaded at a time.

I'm thinking if I start with my "target set" equal to the "ideal set", then execute the following:

    while (memory usage of target set > budget)
    {
        sort the target texture set according to the "priority" of having
            its current target mip loaded, where
            priority = (idealMip - currentTargetMip) * numPixelsUsingTexture / memoryUsage
        select the texture with the lowest priority
        subtract 1 from its currentTargetMip
    }

This would remove the mips that benefit the scene least. Then I would task the resource system with making sure the target set gets loaded.

Unfortunately the above algorithm looks like it'd be very serial, and involve a bunch of sorts... But I think it might achieve good results?

Have any of you seen these implementations executed in a product or game before? Was it successful? Tricky? Worth the time?

Thanks for the comments and suggestions so far,

Josh
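A minimal sketch of how Josh's loop and Sebastian's priority-queue suggestion could fit together. It assumes the common mip numbering (0 = full resolution, larger = coarser, so "dropping a level" increments the target mip - Josh's message counts the other way), and the lazy re-push in place of a true decrease-key, plus all type and field names, are illustrative assumptions rather than code from the thread:

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    struct Texture {
        int idealMip;             // finest level the scene wants (0 = full res)
        int targetMip;            // level we plan to load; starts at idealMip
        int maxMip;               // coarsest level available
        float numPixels;          // screen pixels sampling this texture
        std::uint64_t baseBytes;  // size of the full-resolution level

        // Bytes for levels mip..maxMip (each level down is ~1/4 the size).
        std::uint64_t mipMemory(int mip) const {
            std::uint64_t total = 0;
            for (int m = mip; m <= maxMip; ++m) total += baseBytes >> (2 * m);
            return total;
        }
    };

    // Greedy budget fit in the spirit of Josh's loop: repeatedly degrade the
    // texture whose current target level is worth the least per byte, until
    // the set fits. Skipping stale heap entries on pop stands in for the
    // "decrease key" Sebastian mentions, avoiding a full sort per iteration.
    void FitToBudget(std::vector<Texture>& textures, std::uint64_t budget) {
        auto priority = [](const Texture& t) {
            // Josh's formula, sign-flipped for 0-is-finest mip numbering.
            return float(t.targetMip - t.idealMip) * t.numPixels /
                   float(t.mipMemory(t.targetMip));
        };
        using Entry = std::pair<float, std::size_t>;  // (priority, index)
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;

        std::uint64_t used = 0;
        for (std::size_t i = 0; i < textures.size(); ++i) {
            used += textures[i].mipMemory(textures[i].targetMip);
            heap.push({priority(textures[i]), i});
        }
        while (used > budget && !heap.empty()) {
            Entry e = heap.top();
            heap.pop();
            Texture& t = textures[e.second];
            if (e.first != priority(t)) continue;   // stale; already re-keyed
            if (t.targetMip >= t.maxMip) continue;  // nothing left to drop
            used -= t.mipMemory(t.targetMip);
            ++t.targetMip;                          // drop one level (coarser)
            used += t.mipMemory(t.targetMip);
            heap.push({priority(t), e.second});     // lazy "decrease key"
        }
    }

The float-equality staleness test works because the priority is recomputed deterministically; a per-texture version counter is the more robust variant.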
From: Sebastian S. <seb...@gm...> - 2013-07-22 23:34:55
On Mon, Jul 22, 2013 at 2:26 PM, Krzysztof Narkowicz <k.n...@gm...> wrote:
> A good solution would be just to precompute some data. For example camera is at point X - what should I load.

Another idea for utilizing precomputation: for each 'cell' in your precomputation structure, render a bunch of sample viewpoints at full resolution, with all the shaders, fog, DOF, post processing etc. turned on and *all* MIPs loaded, then remove one MIP level at a time for each texture and measure the amount of additional error doing so causes in the sample frames. Now you have a per-cell list of (texture, mip) pairs ordered in terms of importance for overall scene error. Ideally use some perceptual metric for your error function.
From: Krzysztof N. <k.n...@gm...> - 2013-07-22 21:26:34
Hi,

A good solution would be just to precompute some data. For example: camera is at point X - what should I load? Alternatively: object X is Y meters from camera - what should I load? Without precomputation it's hard to know what will be needed in the near future. The current frame usually doesn't contain enough information.

BTW, isn't loading single mip-maps too fine-grained, at least from a seek time perspective? Most popular solutions are based on two states - a ~64x64 thumb / the full texture.

--
Krzysztof Narkowicz
From: Sebastian S. <seb...@gm...> - 2013-07-22 20:55:30
On Mon, Jul 22, 2013 at 12:16 AM, Josh Green <in...@gm...> wrote:
> Any thoughts? Comments? Alternatives that work better?

I would bake in some kind of factor that relates to "quality improvement" as well. If you have a 4K eyeball texture and it's currently at MIP 2 (1Kx1K), it probably won't gain much from loading another MIP even if it's covering a lot of pixels. So you need to compute how far off the pixel currently is from the *ideal* MIP level, and improve pixels that are far off first (weighted by pixel count).

Assuming you only load one MIP level at a time (i.e. if MIP N is currently loaded, you'll only consider N-1), maybe something like:

    (desired_mip - current_mip) * num_pixels / memory_usage

...with tuning knobs added as needed. That first factor relates to how far "off" a pixel currently is from its ideal MIP level, i.e. how much in need of improvement those pixels are.

--
Sebastian Sylvan
From: Manuel M. <m.m...@wa...> - 2013-07-22 08:12:18
Hi Josh,

> How many pixels would be affected by a particular mip level being loaded versus how much memory would that mip level use?

I think it might boil down to the question of whether it is possible to implement efficient histograms on the GPU (e.g. via http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.40), keyed on mip-map level and texture ID. Doing a low-res histogram pass should give you a very good indication of the number of affected pixels.

cheers,
Manuel
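As an illustration of the keyed reduction Manuel describes - the GPU pass that fills the feedback buffer is elided here, and the Feedback type and all names are invented for the sketch:

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One texel of a hypothetical low-res feedback target: which texture the
    // pixel sampled, and the mip level the sampler would have wanted for it.
    struct Feedback {
        std::uint16_t textureId;
        std::uint8_t desiredMip;
    };

    constexpr int kMaxMips = 16;
    using MipHistogram = std::array<std::uint32_t, kMaxMips>;

    // counts[tex][mip] = number of feedback pixels that want exactly that mip;
    // summing a prefix gives "pixels affected by loading down to level m".
    std::vector<MipHistogram> BuildMipHistogram(const std::vector<Feedback>& fb,
                                                std::size_t numTextures) {
        std::vector<MipHistogram> counts(numTextures, MipHistogram{});
        for (const Feedback& f : fb)
            if (f.textureId < numTextures && f.desiredMip < kMaxMips)
                ++counts[f.textureId][f.desiredMip];
        return counts;
    }

Whether the buffer is binned on the GPU, as in the cited paper, or read back and reduced on the CPU as above is an engine choice; the keying is the same either way.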
From: Josh G. <in...@gm...> - 2013-07-22 07:16:22
Hi there,

I'm currently working on a texture streaming system and looking into ways to prioritize which mips of which textures should be loaded. I was hoping that people on this list may have experience with these algorithms that they could share.

Currently I know which mips need to be loaded in order to render a scene without degradation in image quality. I would now like to apply a memory budget to textures, and make decisions about which mips of which textures should be loaded over others.

My current line of thinking involves comparing the cost / benefit of loading each individual mip, i.e.:

How many pixels would be affected by a particular mip level being loaded, versus how much memory would that mip level use?

This becomes Priority = Benefit / Cost.

I'd then sort for highest priority and assign memory budget to those mips at the top of the list.

Any thoughts? Comments? Alternatives that work better? Alternatives that have worked well enough?

Problems that arise with the above algorithm are quite broad; in particular, optimising the tipping point between cost and benefit would be necessary. Also, calculating the number of pixels affected would involve crude calculations that aren't really going to reflect real-world values...

Thanks all for your thoughts!

Josh
From: Jon W. <jw...@gm...> - 2013-04-30 16:34:53
This is a little late, but better late than never :-) Did you get a reasonable solution? Do you have any images to show?

The way I understand it, those functions are both spatial domain functions. The first one is the sinc() function, and the second one is the "cosine window" function from sampling theory. Also, when you use the sinc function, you typically want to either cut it off at a zero crossing (a multiple of pi) or further window the sinc() function, typically with a cosine window. Finally, to adjust the cosine window, you can raise it to some power for a trade-off between "preserving detail" and "removing aliasing"; this is known as a "raised cosine window function."

Sincerely,

Jon Watte

--
"I pledge allegiance to the flag of the United States of America, and to the republic for which it stands, one nation indivisible, with liberty and justice for all." ~ Adopted by U.S. Congress, June 22, 1942
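For concreteness, a minimal sketch of the frequency-domain reading discussed in Alen's question further down: scale every coefficient in band l by the window evaluated at l. The band-major coefficient layout is the usual convention but still an assumption, and width is the tuning knob debated in the thread:

    #include <cmath>

    const float kPi = 3.14159265358979f;

    // Sketch of the frequency-domain windowing discussed in this thread.
    // Scale every coefficient in band l of a 9-coefficient (order 2) SH
    // vector by the Hann window w(l) = (1 + cos(pi*l/width)) / 2.
    // Assumes band-major layout: sh[0] is band 0, sh[1..3] band 1,
    // sh[4..8] band 2.
    void WindowSH9(float sh[9], float width) {
        int i = 0;
        for (int l = 0; l <= 2; ++l) {
            float w = 0.5f * (1.0f + std::cos(kPi * float(l) / width));
            for (int m = 0; m < 2 * l + 1; ++m)  // band l has 2l+1 coefficients
                sh[i++] *= w;
        }
    }

With order-2 data only w(1) and w(2) actually matter, so tuning width is effectively just picking two attenuation factors for the linear and quadratic bands.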
From: Konstantin M. <co...@ix...> - 2013-04-12 08:08:36
Hi folks,

The Comodo Code Signing CA2 certificate can be used for code signing only - just run certmgr.msc and look at its properties. The Comodo cert is signed by the UTN_USERFirst cert, and the latter does not support kernel mode code signing either; only the parent AddTrust Ext CA Root cert has the KMCS bit enabled. So COMODO does not provide certs for KMCS (.sys drivers). AFAIK the only CAs providing KMCS are Symantec/VeriSign and GlobalSign - and their partners/franchises, of course.

-km
From: Jan W. <jan...@gm...> - 2013-03-23 07:10:40
We have veered outside the scope of [gd-algorithms] - I will reply directly; anyone interested in a reduced, ideally WHQL-signed driver capable of accessing HPET and performance counters, please join the conversation by writing to us.

Have you gone through the WHQL experience before? Are there any pitfalls to avoid? We would love to hear from you.
From: Robert W. <fo...@ro...> - 2013-03-23 01:22:09
Hi Jan,

Sorry, hope you don't mind me posting again before you reply. The thing is, I've just had a go at installing it - I found your install_aken.bat. I got this message:

    "To install the driver, please first enable test mode:
    C:\Users\Robert\AppData\Local\0AD~1.ALP\binaries\system\INSTAL~1.BAT enabletest
    (This is necessary because Vista/Win7 x64 require signing with a Microsoft "cross certificate". The Fraunhofer code signing certificate is not enough, even though its chain of trust is impeccable. Going the WHQL route, perhaps as an "unclassified" driver, might work. See http://www.freeotfe.org/docs/Main/impact_of_kernel_driver_signing.htm )
    Then reboot (!) and install the driver:
    C:\Users\Robert\AppData\Local\0AD~1.ALP\binaries\system\INSTAL~1.BAT install ["path_to_directory_containing_aken*.sys"]
    (If no path is given, we will use the directory of this batch file)
    To remove the driver and disable test mode, execute the following:
    C:\Users\Robert\AppData\Local\0AD~1.ALP\binaries\system\INSTAL~1.BAT remove
    C:\Users\Robert\AppData\Local\0AD~1.ALP\binaries\system\INSTAL~1.BAT disabletest
    Press any key to continue . . ."

So it rather looks as if my code signing certificate is no good for this, as I thought might be a possibility. I expect it is necessary to use a VeriSign certificate. I was told by someone at Microsoft support that it costs $99 per year for one of those, so that's the same cost as the certificate I already use - but I've already bought my certificate for the next year. Still, I could consider it if it solves the problem and there is no other solution.

ALSO HAD AN IDEA ABOUT AN EASY WAY TO MAKE THE DRIVER READ ONLY.

The idea is: there is no need to remove huge chunks of the driver - I can well understand it might take thought to do that without introducing new bugs. You could instead just #ifdef out the routines that write to the mapped hardware resources. I get the impression from your description that this would involve very few actual lines of code. I imagine it would be things like #ifdefs to remove the __writemsr(..) lines, and in ZwOpenSection to replace OPEN_ALL_ACCESS by SECTION_MAP_READ - that sort of thing.

So then the driver source keeps all your other code for writing as before, but if some malware tries to use it to write to memory, it just "goes through the motions" and doesn't actually do anything. In the documentation you make clear that the driver is only to be used for read operations.

Then from your other remarks, presumably if the driver doesn't call any actual routines that write to hardware resources, it might not need signing at all at that point? Is that true?

Just a thought,
Robert
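A sketch of the kind of conditional compilation Robert describes. The AKEN_READONLY macro and the function shapes are invented for illustration and are not the actual aken source; the access-mask names follow the DDK headers (the message's "OPEN_ALL_ACCESS" presumably refers to the section's all-access mask):

    /* Sketch only: AKEN_READONLY and these function shapes are invented
     * for illustration and do not reflect the real aken driver. */
    #include <ntddk.h>

    static void MaybeWriteMsr(ULONG reg, ULONG64 value)
    {
    #ifdef AKEN_READONLY
        /* Read-only build: go through the motions, write nothing. */
        UNREFERENCED_PARAMETER(reg);
        UNREFERENCED_PARAMETER(value);
    #else
        __writemsr(reg, value);
    #endif
    }

    static NTSTATUS OpenPhysicalMemorySection(HANDLE* section,
                                              OBJECT_ATTRIBUTES* attr)
    {
    #ifdef AKEN_READONLY
        const ACCESS_MASK access = SECTION_MAP_READ;  /* mapping is read-only */
    #else
        const ACCESS_MASK access = SECTION_ALL_ACCESS;
    #endif
        return ZwOpenSection(section, access, attr);
    }

Whether an all-read build would escape the signing requirement is a separate question: as far as I know, x64 load-time signature enforcement applies to every kernel-mode driver regardless of what it does, so test mode or a cross-signed certificate would still be needed.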
From: Robert W. <fo...@ro...> - 2013-03-22 11:25:05
Hi Jan,

I've got a Comodo code signing certificate, renewed every year, for signing my application installers.

I just tested it on one of the .sys files for 0.A.D. - it signed the file successfully, and the file says it is digitally signed when I right-click on Properties.

I'm not sure how to test whether the certificate I used is enough to let the driver be installed okay. I'm running Windows 7 64-bit here, so it's ideal for testing whether it installs okay, if you can tell me what to do to test that.

Presumably once it is installed, the application doesn't need to be run with admin privileges to access it?

Yes, it is absolutely fine if it is read-only for apps like mine that only need to read the HPET counter.

And yes, I'm happy to work together with you on this, if it is something I have the ability to do and if I can find the time for it. If it is something that can be done in a few days of work then I can surely find the time one way or another. Longer projects can be harder to find time for.

I have no experience of writing drivers, though, and I don't use C++; I write low-level Windows C most of the time. But if it is just a case of removing functionality and recompiling, maybe it is something I can do?

I use Visual C++ 6.0 for my programming, which is probably not new enough for this? I use the Visual Studio family of free compilers as well, but only for debugging from time to time, to take advantage of a few debug capabilities they have that aren't in Visual C++ 6.0.

Simplest, though, is just to code sign the existing version, if that works for now. I don't really see how it is going to be a risk, because to access the driver, wouldn't an application have to be installed on your computer anyway? Which means the user has given it admin permissions to install whatever it likes, and if it does something malicious, surely it would do it directly - e.g. install a known virus or spyware - rather than try to access a driver to address the HPET counter? Or am I missing something there? Though it is an advantage if there is one less thing for the user to have to accept during install.

Also, how do you use it? I've never installed a driver or interfaced directly with one. Is there example code in the zip?

Thanks,
Robert
From: Jan W. <jan...@gm...> - 2013-03-22 03:19:07
Hi Robert,

this driver is indeed still available. You can always check http://trac.wildfiregames.com/browser/ps/trunk/source/lib/sysdep/os/win/aken and I will investigate tonight where the zip file went.

Unfortunately there has been no progress on signing it - and that is essential for it to be usable by non-developers. Is there anyone who can help?

The difficulty is that the driver provides functions to map physical memory (HPET is memory-mapped) and write model-specific registers, which raises security concerns. Perhaps we can change the interface to provide only 'safe' functionality, such as reading HPET and performance counters, instead of allowing apps to do that and more. Anyone willing and able to work together with me on this?
From: Robert W. <fo...@ro...> - 2013-03-22 02:59:59
Hi Jan,

I wondered, is this driver still available by any chance?

I'm trying to access the HPET timer in my software on XP, and on later versions of Windows, where it may be enabled in the BIOS but seems to be disabled by default in the OS. On the later versions of Windows I could instruct my users to enable it using bcdedit, but the problem with that is that in the online forums a few people report performance issues, and possibly even freezes, with it enabled (which may be why the default is for it to be disabled). On XP there seems to be no way to access it normally.

It looks as if your driver might be a possible solution, but I get "404 Not Found" when I try to download the zip.

This is for a software metronome.

Thanks,
Robert Walker
From: Alen L. <ale...@cr...> - 2013-03-01 20:47:16
Hi all,

I was revisiting some of our old shader code (order 2, i.e. 9 coefficients), trying to fix ringing in some high contrast environments. I am looking at "Stupid Spherical Harmonics (SH) Tricks" by Peter-Pike Sloan, the section about windowing.

The Lanczos and Hann functions mentioned are of the form

    sin(pi*x/w)/(pi*x/w)

and

    (1+cos(pi*x/w))/2

respectively, for a given filtering window of width "w". The rest of the text, as far as I can see, seems to imply that w should be equal to the order of SH used (w=6 for 6th order).

Initially, I was expecting the filtering to be done by convolution in the spatial domain: projecting the filter function onto the SH basis and then multiplying its y[l,0] coefficients into the appropriate bands (y[l,m]) of the filtered function. However, all the other notions in there seem to point to this actually being done as multiplication in the frequency domain. More specifically, as if the filtering function as presented above _already is_ the frequency domain representation. So it would just multiply the y(l,m) coefficients of the filtered function by the values of the filter evaluated at integers.

This looks rather unusual to me, but I guess there is some reasoning behind all that.

As a side note, I got some ok-ish results with filtering in the spatial domain (aka blurring) using the Hann function as above (*) with the value w=1.5, through convolution as described above. (The number w=1.5 was "tuned" by manual binary search with visual assessment of the results, to minimize visible ringing without introducing "too much" blur.) On the other hand, using multiplication directly in the frequency domain looked way too blurry for w=3, and was ringing too much if I tried increasing the number (since with multiplication in the frequency domain a wider "window" leads to less blur, while the opposite happens in the spatial domain).

I would appreciate it if anyone could shed a bit of light on this.

Thanks a lot in advance,
Alen

(*) That form of the function does look strange for the spatial domain, but 1.5 is an arbitrary "hand-tuned" factor anyway.