|
From: samclemmens <geo...@gm...> - 2010-06-08 15:33:20
|
Hi,

I'm relatively new to GeoWebCache... and I'm a bit puzzled by the directory/file name structure that is produced. I have a few questions, namely:

1. The documentation states that the naming convention is layername/projection_z/[x/(2^(z/2))]_[y/(2^(z/2))]/x_y.extension. What is the z value?

2. Given the sample output .\gwc\roads\EPSG_900913_12\006_019\000848_002540.png, I assume that z=12, x=848, and y=2540. Obviously the above formula won't produce 006_019 based on these numbers. What conversion is required?

3. Is it possible to change the directory/file name format? I'd like to create a data set that can be used by Ordnance Survey OpenSpace, which uses a different structure based on the British National Grid.

Cheers,

Peter

--
View this message in context: http://old.nabble.com/GeoWebCache-directory-file-name-structure-tp28819540p28819540.html
Sent from the GeoWebCache Users mailing list archive at Nabble.com.
|
|
From: Arne K. <ar...@ti...> - 2010-06-08 22:15:09
|
1) "Zoom level". For the default EPSG:900913 grid, z=0 means the whole world in 4 tiles.

2) Hmm, good point. I don't think I went back and updated that wiki page; it was not more than a scratchpad for sharing ideas. The actual code is here[1]; the difference is that the bit shifting starts with 2, effectively making it 2^( 1 + ( z / 2 ) ). And there's some zero-padding.

3) The catch is that you need to know a little bit of Java, but if you do then it's potentially straightforward. It depends what the OpenSpace structure is based on, and whether we have similar tokens in GWC.

blobstore/file contains an implementation of BlobStore. You can copy the whole thing to a new package, and then make the FilePathGenerator output the structure you want. It is linked into the application using Spring, WEB-INF/geowebcache-core-context.xml, if you are working off trunk. I would guess that you also want a particular GridSet to go with that, to match the resolutions and origin used by OpenSpace.

It's possible someone has already done it, but so far I have not heard about it.

-Arne

1: http://geowebcache.org/trac/browser/trunk/geowebcache/core/src/main/java/org/geowebcache/storage/blobstore/file/FilePathGenerator.java
|
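Arne's corrected formula can be checked against Peter's sample path. A minimal sketch of the naming scheme, with integer division and 2^(1 + z/2) bucketing as he describes (the padding widths are inferred from the sample output, not taken from the actual FilePathGenerator source):

```java
public class TilePathSketch {
    // Intermediate directory: x and y each divided (integer division)
    // by 2^(1 + z/2), then zero-padded to 3 digits.
    static String intermediateDir(int z, long x, long y) {
        long half = 1L << (1 + z / 2);
        return String.format("%03d_%03d", x / half, y / half);
    }

    // Tile file name: x and y zero-padded to 6 digits.
    static String fileName(long x, long y, String ext) {
        return String.format("%06d_%06d.%s", x, y, ext);
    }

    public static void main(String[] args) {
        // z=12: 2^(1 + 12/2) = 128; 848/128 = 6 and 2540/128 = 19.
        System.out.println(intermediateDir(12, 848, 2540)); // 006_019
        System.out.println(fileName(848, 2540, "png"));     // 000848_002540.png
    }
}
```

This reproduces Peter's observed 006_019 directory and 000848_002540.png file name.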
|
From: samclemmens <geo...@gm...> - 2010-06-10 09:22:14
|
Cheers, Arne!

I'm curious; what was the rationale for creating a directory structure with just one level? We processed a data set earlier this week, which resulted in 400,000 directories. When I tried an "ls", my PuTTY console froze...

Peter

--
View this message in context: http://old.nabble.com/GeoWebCache-directory-file-name-structure-tp28819540p28840704.html
|
|
From: Arne K. <ar...@ti...> - 2010-06-10 10:30:28
|
It's a compromise, and this part of the code is deliberately made easily pluggable so that one can use a different structure for special needs. I was expecting that we would have a few different implementations by now, but apparently not.

Every directory level requires a node to be looked up (move the disk head), so too many directories is not good either. Within a directory there is a proper index. Listing the content of the cache is generally not recommended; I don't know of a single filesystem that is optimized for that, and most utilities can't handle something of this size. "find" is usually ok.

But 400,000 directories... Unless you've discovered a bug, you should have at least 80 billion tiles? I wouldn't use a conventional filesystem for that, and on a fast machine (100 tiles per second) it would take 25 years to build a cache of that size.

-Arne
|
|
From: Arne K. <ar...@ti...> - 2010-06-11 12:03:15
|
So the plot thickens :)

You mentioned in another email that you were just seeding one zoom level. So I guess you also made a grid set configuration with very few levels, perhaps just 2?

This isn't useful for most GWC users; you need a tile pyramid with many levels so that people can gradually zoom in. I am guessing that for OpenSpace you'll be subsampling these tiles to make up intermediate resolutions.

So this becomes a very extreme case that the default directory structure was definitely not designed to handle. Normally you only have 1 to 16 tiles at zoom level 0, so by the time you get down to 16 million tiles (z=12 or so) they should be spread across 4000 directories with 4000 files each.

If you have a thought-out directory structure for your application, I would implement that as a new BlobStore, because your needs will be different from the normal GWC user. Alternatively, you could use the existing tile structure and just define a few resolutions you will not use, so that the level with 16 million tiles is z=12 or so.

Note that you need to clear the cache (delete the directory) if you change the grid set definition.

-Arne

On 06/10/2010 10:05 PM, TMartin wrote:
> Hi Arne
>
> Tim here. Pete and I are both working on the same issue here at OS.
>
> In each directory there are only 4 tiles of 250x250 pixels and 1 metre
> resolution, as specified in our GeoWebCache configuration XML document.
>
> If that's the case we may end up with over 3 million directories for the
> 14,560,000 tiles we are creating.
>
> I will try and set it up so you can take a look, if you are interested?
>
> Tim
|
|
From: Arne K. <ar...@ti...> - 2010-06-14 22:14:53
|
Hi Tim
I am saying that if you had defined the whole array of resolutions in
GWC then you would have gotten a reasonably good directory structure,
since the tiles would be bucketed with z=10 instead of z=0.
Think of the directories as buckets, and a larger Z makes each directory
accept a larger range of X and Y values. At the same time it keeps tiles
that are close together in the same directory. Peter seems to have gone
through the formula in detail, so perhaps it's easier for him to explain
in person than it is for me via email.
Use this (you'll have to clear the cache, sorry)
<resolutions>
<double>2000</double>
<double>1000</double>
<double>500</double>
<double>200</double>
<double>100</double>
<double>50</double>
<double>25</double>
<double>10</double>
<double>5</double>
<double>2</double>
<double>1</double>
</resolutions>
and then just seed the last level and take that with you. The other
levels probably won't take long anyway, depending on how you've styled it.
-Arne
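To see why the fuller resolution list helps, here is a rough calculation of the bucket sizes for the British National Grid case, using Tim's figures (extent 0,0 to 700000,1300000; 250 px tiles at 1 m resolution) and the 2^(1 + z/2) grouping described earlier. The ceiling-division helper is my own, and the exact counts are an illustration rather than output from GWC itself:

```java
public class BucketSketch {
    // Ceiling division for positive longs.
    static long ceilDiv(long a, long b) { return (a + b - 1) / b; }

    public static void main(String[] args) {
        long tilesX = 700_000 / 250;   // 2800 tiles across at 1 m/px
        long tilesY = 1_300_000 / 250; // 5200 tiles up
        System.out.println(tilesX * tilesY + " tiles total"); // 14,560,000

        // With only one resolution defined, the 1 m level is z=0:
        long halfZ0 = 1L << (1 + 0 / 2); // buckets of 2x2 tiles
        System.out.println(ceilDiv(tilesX, halfZ0) * ceilDiv(tilesY, halfZ0)
                + " directories at z=0"); // 3,640,000 dirs, 4 tiles each

        // With the 11-entry resolution list, the 1 m level becomes z=10:
        long halfZ10 = 1L << (1 + 10 / 2); // buckets of 64x64 tiles
        System.out.println(ceilDiv(tilesX, halfZ10) * ceilDiv(tilesY, halfZ10)
                + " directories at z=10"); // 3608 dirs, up to 4096 tiles each
    }
}
```

The z=0 figure matches Tim's estimate of over 3 million directories with 4 tiles each; at z=10 the same tiles fit in a few thousand directories.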
On 06/14/2010 04:28 PM, TMartin wrote:
> Hi Arne
>
> In our OpenSpace API we have 10 zoom levels and their resolutions.
>
> 10 Streetview 1m
>
> 9 Streetview Resampled 2m
>
> 8 1:50 000 5m
>
> 7 1:50 000 Resampled 10m
>
> 6 1:250 000 25m
>
> 5 1:250 000 Resampled 50m
>
> 4 MiniScale 100m
>
> 3 MiniScale Resampled 200m
>
> 2 Overview 500m
>
> 1 Overview Resampled 1000m
>
> 0 Overview 2000m
>
> These are already raster tiles and are chopped into either 200x200 pixel
> tiles or 250x250 pixel tiles depending on their original tile size.
>
> When OpenSpace was built we used our own tile naming convention and
> directory structure because there were no others in existence.
>
> We are now wanting to create tiles from a vector dataset to replace the
> raster dataset at zoom levels 10 and 9 (resolution 1m and 2m).
>
> So we have PostGIS with the data in, GeoServer 2.0.2, style sheets (SLDs),
> and want to use GeoWebCache to create the tiles in EPSG:27700 (obviously
> British National Grid ;)
>
> After spending some time with a calculator I have worked out that we need
> 14,560,000 250x250 pixel tiles to cover GB, with coordinates 0, 0, 700000,
> 1300000 at 1m resolution.
>
> This is the configuration file I have been using, and it works and creates
> tiles.
>
>
> <?xml version="1.0" encoding="utf-8"?>
> <gwcConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>
> xsi:noNamespaceSchemaLocation="http://geowebcache.org/schema/1.2.0/geowebcache.xsd"
> xmlns="http://geowebcache.org/schema/1.2.0">
> <version>1.2.0</version>
> <backendTimeout>300</backendTimeout>
>
> <gridSets>
> <!--
> String name;
> SRS srs;
> BoundingBox extent;
> double[] resolutions;
> double[] scales;
> String[] scaleNames;
> Integer levels;
> Integer tileHeight;
> Integer tileWidth;
> -->
>
>
>
> <gridSet>
> <name>osgb_vml</name>
> <srs><number>27700</number></srs>
> <extent>
> <coords>
> <double>0</double>
> <double>0</double>
> <double>700000</double>
> <double>1300000</double>
> </coords>
> </extent>
> <alignTopLeft>false</alignTopLeft>
> <resolutions>
> <double>1</double>
> </resolutions>
> <tileHeight>250</tileHeight>
> <tileWidth>250</tileWidth>
> </gridSet>
>
>
> </gridSets>
> <layers>
>
>
> <wmsLayer>
> <name>osgb:vml</name>
> <gridSubsets>
> <gridSubset>
> <gridSetName>osgb_vml</gridSetName>
> </gridSubset>
> </gridSubsets>
> <wmsUrl><string>http://localhost:8080/geoserver/wms</string></wmsUrl>
> <wmsLayers>osgb:areas,osgb:lines,osgb:roadclines,osgb:text</wmsLayers>
> <!-- OPTIONAL The metatiling factors used for this layer
> If not specified, 3x3 metatiling is used for image formats -->
> <metaWidthHeight><int>1</int><int>1</int></metaWidthHeight>
> <bgColor>0xFFFFFF</bgColor>
> </wmsLayer>
>
> </layers>
> </gwcConfiguration>
>
>
> So when I seed this layer and go to the setup page I only have the option
> of seeding between start zoom level 00 and finish zoom level 00.
>
> Which is fine.
>
> The problem, as Pete mentioned above, is the sheer number of directories
> that only have 4 tiles in each folder.
>
> I hope that gives a better picture of what we are up to. I appreciate it's
> an extreme case, but when we get it sorted we will post up the solution and
> the timeframes it took.
>
> cheers
>
> Tim
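Arne's suggestion above amounts to replacing the single-entry <resolutions> block in this grid set with the full 11-level list. A sketch of the amended gridSet, keeping Tim's extent and tile size and assuming the remaining elements stay as in the original config:

```xml
<gridSet>
  <name>osgb_vml</name>
  <srs><number>27700</number></srs>
  <extent>
    <coords>
      <double>0</double>
      <double>0</double>
      <double>700000</double>
      <double>1300000</double>
    </coords>
  </extent>
  <alignTopLeft>false</alignTopLeft>
  <!-- Full OpenSpace resolution ladder, 2000 m/px (z=0) down to 1 m/px (z=10) -->
  <resolutions>
    <double>2000</double>
    <double>1000</double>
    <double>500</double>
    <double>200</double>
    <double>100</double>
    <double>50</double>
    <double>25</double>
    <double>10</double>
    <double>5</double>
    <double>2</double>
    <double>1</double>
  </resolutions>
  <tileHeight>250</tileHeight>
  <tileWidth>250</tileWidth>
</gridSet>
```

Seeding only the last level then produces the 1 m tiles with z=10 bucketing instead of z=0.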
|
|
From: TMartin <Tim...@or...> - 2010-06-15 09:23:18
|
Hi Arne,

Will give this a go at the end of the week and will let you know how we get on.

Thanks,

Tim
The documentation states that the naming convention is: >>>>>>>> layername/projection_z/[x/(2^(z/2))]_[y/(2^(z/2))]/x_y.extension >>>>>>>> What >>>>>>>> is >>>>>>>> the z value? >>>>>>>> >>>>>>>> 2. Given the sample output >>>>>>>> .\gwc\roads\EPSG_900913_12\006_019\000848_002540.png, I assume that >>>>>>>> z=12, >>>>>>>> x=848, and y=2540. Obviously the above formula won't produce >>>>>>>> 006_019 >>>>>>>> based >>>>>>>> on these numbers. What conversion is required? >>>>>>>> >>>>>>>> 3. Is it possible to change the directory/file name format? I'd >>>>>>>> like >>>>>>>> to >>>>>>>> create a data set that can be used by Ordnance Survey OpenSpace, >>>>>>>> which >>>>>>>> uses >>>>>>>> a different structure based on the British National Grid. >>>>>>>> >>>>>>>> Cheers, >>>>>>>> >>>>>>>> Peter >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> ------------------------------------------------------------------------------ >>>>>>> ThinkGeek and WIRED's GeekDad team up for the Ultimate >>>>>>> GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the >>>>>>> lucky parental unit. See the prize list and enter to win: >>>>>>> http://p.sf.net/sfu/thinkgeek-promo >>>>>>> _______________________________________________ >>>>>>> Geowebcache-users mailing list >>>>>>> Geo...@li... >>>>>>> https://lists.sourceforge.net/lists/listinfo/geowebcache-users >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> ------------------------------------------------------------------------------ >>>>> ThinkGeek and WIRED's GeekDad team up for the Ultimate >>>>> GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the >>>>> lucky parental unit. See the prize list and enter to win: >>>>> http://p.sf.net/sfu/thinkgeek-promo >>>>> _______________________________________________ >>>>> Geowebcache-users mailing list >>>>> Geo...@li... 
>>>>> https://lists.sourceforge.net/lists/listinfo/geowebcache-users >>>>> >>>>> >>>>> >>>>> >>>> >>>> >>> >>> ------------------------------------------------------------------------------ >>> ThinkGeek and WIRED's GeekDad team up for the Ultimate >>> GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the >>> lucky parental unit. See the prize list and enter to win: >>> http://p.sf.net/sfu/thinkgeek-promo >>> _______________________________________________ >>> Geowebcache-users mailing list >>> Geo...@li... >>> https://lists.sourceforge.net/lists/listinfo/geowebcache-users >>> >>> >>> >> > > > ------------------------------------------------------------------------------ > ThinkGeek and WIRED's GeekDad team up for the Ultimate > GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the > lucky parental unit. See the prize list and enter to win: > http://p.sf.net/sfu/thinkgeek-promo > _______________________________________________ > Geowebcache-users mailing list > Geo...@li... > https://lists.sourceforge.net/lists/listinfo/geowebcache-users > > -- View this message in context: http://old.nabble.com/GeoWebCache-directory-file-name-structure-tp28819540p28888993.html Sent from the GeoWebCache Users mailing list archive at Nabble.com. |
|
From: TMartin <Tim...@or...> - 2010-06-10 20:05:58
|
Hi Arne

Tim here. Pete and I are both working on the same issue here at OS.

In each directory there are only 4 tiles of 250x250 pixels at 1 metre resolution, as specified in our GeoWebCache configuration XML document.

If that's the case, we may end up with over 3 million directories for the 14,560,000 tiles we are creating.

I will try and set it up so you can take a look at it, if you are interested?

Tim



Arne Kepp-2 wrote:
>
> It's a compromise, and this part of the code is definitely made easily
> pluggable so that one can use a different structure for special needs.
> I was expecting that we would have a few different implementations by
> now, but apparently not.
>
> Every directory level requires a node to be looked up (move the disk
> head), so too many directories is not good either. Within a directory
> there is a proper index. Listing the content of the cache is generally
> not recommended; I don't know of a single filesystem that is optimized
> for that, and most utilities can't handle something of this size. "find"
> is usually ok.
>
> But 400 000 directories... Unless you've discovered a bug, you should
> have at least 80 billion tiles? I wouldn't use a conventional
> filesystem for that, and on a fast machine (100 tiles per second) it
> would take 25 years to build a cache of that size.
>
> -Arne
>
> [earlier quoted messages snipped]

--
View this message in context: http://old.nabble.com/GeoWebCache-directory-file-name-structure-tp28819540p28847840.html
Sent from the GeoWebCache Users mailing list archive at Nabble.com. |
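Tim's 3-million-directory estimate is consistent with the bucketing scheme Arne describes in this thread (2^(1 + z/2) tiles per axis per directory). A back-of-the-envelope sketch, ignoring partial directories at the grid edges:

```java
// Rough directory count for a grid set with a single zoom level (z=0):
// each directory then holds only 2x2 = 4 tiles, so 14,560,000 tiles end
// up spread across roughly 3.64 million directories. Assumes the
// bucketing formula discussed in the thread; edge effects are ignored.
public class DirectoryCount {

    static long tilesPerDirectory(int z) {
        long perAxis = 1L << (1 + z / 2);  // 2 tiles per axis at z=0
        return perAxis * perAxis;          // 4 tiles per directory at z=0
    }

    static long approxDirectories(long totalTiles, int z) {
        return totalTiles / tilesPerDirectory(z);
    }

    public static void main(String[] args) {
        System.out.println(approxDirectories(14_560_000L, 0)); // 3640000
    }
}
```

By contrast, treating the same level as z=12 gives 128x128 = 16,384 tiles per directory, which is why Arne suggests padding the grid set with unused resolutions.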
|
From: TMartin <Tim...@or...> - 2010-06-14 14:28:50
|
Hi Arne

In our OpenSpace API we have 10 zoom levels and their resolutions:

10  Streetview             1m
 9  Streetview Resampled   2m
 8  1:50 000               5m
 7  1:50 000 Resampled     10m
 6  1:250 000              25m
 5  1:250 000 Resampled    50m
 4  MiniScale              100m
 3  MiniScale Resampled    200m
 2  Overview               500m
 1  Overview Resampled     1000m
 0  Overview               2000m

These are already raster tiles and are chopped into either 200x200 pixel tiles or 250x250 pixel tiles, depending on their original tile size.

When OpenSpace was built we used our own tile naming convention and directory structure, because there were no others in existence.

We are now wanting to create tiles from a vector dataset to replace the raster dataset at zoom levels 10 and 9 (resolutions 1m and 2m).

So we have PostGIS with the data in, GeoServer 2.0.2, style sheets (SLDs), and want to use GeoWebCache to create the tiles in EPSG:27700 (obviously British National Grid ;)

After spending some time with a calculator I have worked out that we need 14,560,000 250x250 pixel tiles to cover GB, with coordinates 0, 0, 700000, 1300000 at 1m resolution.

This is the configuration file I have been using, and it works and creates tiles:

<?xml version="1.0" encoding="utf-8"?>
<gwcConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:noNamespaceSchemaLocation="http://geowebcache.org/schema/1.2.0/geowebcache.xsd"
  xmlns="http://geowebcache.org/schema/1.2.0">
  <version>1.2.0</version>
  <backendTimeout>300</backendTimeout>

  <gridSets>
    <!--
      String name;
      SRS srs;
      BoundingBox extent;
      double[] resolutions;
      double[] scales;
      String[] scaleNames;
      Integer levels;
      Integer tileHeight;
      Integer tileWidth;
    -->
    <gridSet>
      <name>osgb_vml</name>
      <srs><number>27700</number></srs>
      <extent>
        <coords>
          <double>0</double>
          <double>0</double>
          <double>700000</double>
          <double>1300000</double>
        </coords>
      </extent>
      <alignTopLeft>false</alignTopLeft>
      <resolutions>
        <double>1</double>
      </resolutions>
      <tileHeight>250</tileHeight>
      <tileWidth>250</tileWidth>
    </gridSet>
  </gridSets>

  <layers>
    <wmsLayer>
      <name>osgb:vml</name>
      <gridSubsets>
        <gridSubset>
          <gridSetName>osgb_vml</gridSetName>
        </gridSubset>
      </gridSubsets>
      <wmsUrl><string>http://localhost:8080/geoserver/wms</string></wmsUrl>
      <wmsLayers>osgb:areas,osgb:lines,osgb:roadclines,osgb:text</wmsLayers>
      <!-- OPTIONAL The metatiling factors used for this layer.
           If not specified, 3x3 metatiling is used for image formats -->
      <metaWidthHeight><int>1</int><int>1</int></metaWidthHeight>
      <bgColor>0xFFFFFF</bgColor>
    </wmsLayer>
  </layers>
</gwcConfiguration>

So when I seed this layer and go to the setup page, I only get the option of seeding between start zoom level 00 and finish zoom level 00.

Which is fine.

The problem, as Pete mentioned above, is the sheer number of directories that only have 4 tiles in each folder.

I hope that gives a better picture of what we are up to. I appreciate it's an extreme case, but when we get it sorted we will post up the solution and the timeframes it took.

cheers

Tim



Arne Kepp-2 wrote:
>
> So the plot thickens :)
>
> You mentioned in another email that you were just seeding one zoomlevel.
> So I guess you also made a grid set configuration with very few levels, > perhaps just 2? > > This isn't useful for most GWC users, you need a tile pyramid with many > levels sot hat people can gradually zoom in. I am guessing that for > OpenSpace you'll be subsampling these tiles to make up intermediate > resolutions. > > So this becomes a very extreme case that the default directory structure > was definitely not designed to handle, normally you only have 1 to 16 > tiles at zoomlevel 0 , so by the time you get down to 16 million tiles > (z=12 or so) they should be spread across 4000 directories with 4000 > files each. > > If you have a thought-out directory structure for your application, I > would implement that as a new BlobStore, because your needs will be > different from the normal GWC user. Alternatively, you could use the > existing tile structure and just define a few resolutions you will not > use, so that the level with 16 million tiles is z=12 or so. > > Note that you need to clear the cache (delete the directory) if you > change the grid set definition. > > -Arne > > > On 06/10/2010 10:05 PM, TMartin wrote: >> Hi Arne >> >> Tim here. Pete and I are both working on the same issue here at OS. >> >> In each directory there are only 4 tiles of 250x250pixels and 1metre >> resolution as specified in our geowebcache configuration xml document. >> >> If thats the case we may end up with over 3 million directories for the >> 14,560,000 tiles we are creating. >> >> I will try and set it up so you can take a look at if you are interested? >> >> Tim >> >> >> >> Arne Kepp-2 wrote: >> >>> It's a compromise, and this part of the code is definitely made easily >>> pluggable so that one can use a different structures for special needs. >>> I was expecting that we would have a few different implementations by >>> now, but apparently not. >>> >>> Every directory level requires a node to be looked up (move the disk >>> head), so too many directories is not good either. 
Within a directory >>> there is a proper index. Listing the content of the cache is generally >>> not recommended, I don't know of a single filesystem that is optimized >>> for that and most utilities can't handle something of this size. "find" >>> is usually ok. >>> >>> But 400 000 directories.... Unless you've discovered a bug, you should >>> have at least 80 billion tiles? I wouldn't use a conventional >>> filesystem for that, and on a fast machine (100 tiles per second) it >>> would take 25 years to build a cache of that size. >>> >>> -Arne >>> >>> >>> On 06/10/2010 11:22 AM, samclemmens wrote: >>> >>>> Cheers, Arne! >>>> >>>> I'm curious; what was the rationale for creating a directory structure >>>> with >>>> just one level? We processed a data set earlier this week, which >>>> resulted >>>> in 400,000 directories. When I tried to an "ls", my PuTTY console >>>> froze... >>>> >>>> Peter >>>> >>>> >>>> >>>> Arne Kepp-2 wrote: >>>> >>>> >>>>> 1) "zoom level". For the default EPSG:900913 grid z=0 means the whole >>>>> world in 4 tiles >>>>> >>>>> 2) Hm,,, good point. I dont think I went back and updated that wiki >>>>> page, it was not more than a scratchpad for sharing ideas. The actual >>>>> code is here[1], the difference is that the bit shifting starts with >>>>> 2, >>>>> effectively making it 2^( 1 + ( z / 2 )). And there's some >>>>> zero-padding. >>>>> >>>>> 3) The catch is that you need to know a little bit of Java, but if you >>>>> do then it's potentially straightforward. It depends what the >>>>> OpenSpace >>>>> structure is based on, and whether we have similar tokens in GWC. >>>>> >>>>> blobstore/file contains an implementation of BlobStore. You can copy >>>>> the >>>>> whole thing to a new package, and then make the FilePathGenerator >>>>> output >>>>> the structure you want. It is linked into the application using >>>>> Spring, >>>>> WEB-INF/geowebcache-core-context.xml , if you are working off trunk. 
I >>>>> would guess that you also want a particular GridSet to go with that, >>>>> to >>>>> match the resolutions and origin used by OpenSpace. >>>>> >>>>> It's possible someone has already done it, but so far I have not heard >>>>> about it. >>>>> >>>>> -Arne >>>>> >>>>> 1: >>>>> http://geowebcache.org/trac/browser/trunk/geowebcache/core/src/main/java/org/geowebcache/storage/blobstore/file/FilePathGenerator.java >>>>> >>>>> >>>>> On 06/08/2010 05:33 PM, samclemmens wrote: >>>>> >>>>> >>>>>> Hi, >>>>>> >>>>>> I'm relatively new to GeoWebCache...and I'm a bit puzzled by the >>>>>> directory/file name structure that is produced. I have a few >>>>>> questions, >>>>>> namely: >>>>>> >>>>>> 1. The documentation states that the naming convention is: >>>>>> layername/projection_z/[x/(2^(z/2))]_[y/(2^(z/2))]/x_y.extension >>>>>> What >>>>>> is >>>>>> the z value? >>>>>> >>>>>> 2. Given the sample output >>>>>> .\gwc\roads\EPSG_900913_12\006_019\000848_002540.png, I assume that >>>>>> z=12, >>>>>> x=848, and y=2540. Obviously the above formula won't produce 006_019 >>>>>> based >>>>>> on these numbers. What conversion is required? >>>>>> >>>>>> 3. Is it possible to change the directory/file name format? I'd >>>>>> like >>>>>> to >>>>>> create a data set that can be used by Ordnance Survey OpenSpace, >>>>>> which >>>>>> uses >>>>>> a different structure based on the British National Grid. >>>>>> >>>>>> Cheers, >>>>>> >>>>>> Peter >>>>>> >>>>>> >>>>>> >>>>> ------------------------------------------------------------------------------ >>>>> ThinkGeek and WIRED's GeekDad team up for the Ultimate >>>>> GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the >>>>> lucky parental unit. See the prize list and enter to win: >>>>> http://p.sf.net/sfu/thinkgeek-promo >>>>> _______________________________________________ >>>>> Geowebcache-users mailing list >>>>> Geo...@li... 
>>>>> https://lists.sourceforge.net/lists/listinfo/geowebcache-users >>>>> >>>>> >>>>> >>>>> >>>> >>>> >>> >>> ------------------------------------------------------------------------------ >>> ThinkGeek and WIRED's GeekDad team up for the Ultimate >>> GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the >>> lucky parental unit. See the prize list and enter to win: >>> http://p.sf.net/sfu/thinkgeek-promo >>> _______________________________________________ >>> Geowebcache-users mailing list >>> Geo...@li... >>> https://lists.sourceforge.net/lists/listinfo/geowebcache-users >>> >>> >>> >> > > > ------------------------------------------------------------------------------ > ThinkGeek and WIRED's GeekDad team up for the Ultimate > GeekDad Father's Day Giveaway. ONE MASSIVE PRIZE to the > lucky parental unit. See the prize list and enter to win: > http://p.sf.net/sfu/thinkgeek-promo > _______________________________________________ > Geowebcache-users mailing list > Geo...@li... > https://lists.sourceforge.net/lists/listinfo/geowebcache-users > > -- View this message in context: http://old.nabble.com/GeoWebCache-directory-file-name-structure-tp28819540p28880229.html Sent from the GeoWebCache Users mailing list archive at Nabble.com. |
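Tim's 14,560,000 figure follows directly from the extent and tile size in the configuration above; a quick arithmetic check:

```java
// Verifies the tile count for the osgb_vml grid set: extent 700,000 m x
// 1,300,000 m, 250x250 px tiles at 1 m/px => 2800 columns x 5200 rows
// = 14,560,000 tiles at the finest level.
public class TileCount {

    static long tilesAcross(double extentMetres, double metresPerPixel, int tilePixels) {
        return (long) Math.ceil(extentMetres / (metresPerPixel * tilePixels));
    }

    public static void main(String[] args) {
        long cols = tilesAcross(700_000, 1.0, 250);   // 2800
        long rows = tilesAcross(1_300_000, 1.0, 250); // 5200
        System.out.println(cols * rows);
    }
}
```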
|
From: samclemmens <geo...@gm...> - 2010-06-18 11:23:34
|
Hi Arne, Tim is processing the data in EC2 per your recommendation above. He estimates that it will take 30 hours to process national coverage for Great Britain and then another 15 hours to create the spatial indexes... In the meantime, I have a question about the parameters in the FilePathGenerator class. (Some of these are obvious, but I just want to confirm.) Can you tell me what the following are? - prefix - tileIndex - gridSetId - parameters_id - x (northing?) - y (easting?) - gridSetStr By the way, have you guys considered creating an AMI with the new OpenGeo stack? I got most of the individual pieces working on a Fedora instance last month, but it would have been a heck of a lot easier to just fire up an AMI... Thanks, Peter TMartin wrote: > > Hi Arne > > Will give this a go at the end of the week and will let you know how we > get on > > thanks > > Tim > > > > Arne Kepp-2 wrote: >> >> Hi Tim >> >> I am saying that if you had defined the whole array of resolutions in >> GWC then you would have gotten a reasonably good directory structure, >> decimating the tiles differently due to z=10 instead of z=0. >> >> Think of the directories as buckets, and a larger Z makes each directory >> accept a larger range of X and Y values. At the same time it keeps tiles >> that are close together in the same directory. Peter seems to have gone >> through the formula in detail, so perhaps it's easier for him to explain >> in person than it is for me via email. >> >> Use this (you'll have to clear the cache, sorry) >> >> <resolutions> >> <double>2000</double> >> <double>1000</double> >> <double>500</double> >> <double>200</double> >> <double>100</double> >> <double>50</double> >> <double>25</double> >> <double>10</double> >> <double>5</double> >> <double>2</double> >> <double>1</double> >> </resolutions> >> >> >> and then just seed the last level and take that with you. The other >> levels probably wont take long anyway, depending on how you've styled it. 
>> >> -Arne >> >> >> >> On 06/14/2010 04:28 PM, TMartin wrote: >>> Hi Arne >>> >>> In our OpenSpace API we have 10 zoom levels and their resolutions. >>> >>> 10 Streetview 1m >>> >>> 9 Streetview Resampled 2m >>> >>> 8 1:50 000 5m >>> >>> 7 1:50 000 Resampled 10m >>> >>> 6 1:250 000 25m >>> >>> 5 1:250 000 Resampled 50m >>> >>> 4 MiniScale 100m >>> >>> 3 MiniScale Resampled 200m >>> >>> 2 Overview 500m >>> >>> 1 Overview Resampled 1000m >>> >>> 0 Overview 2000m >>> >>> These are already raster tiles and are chopped into either 200x200 pixel >>> tiles or 250x250 pixel tiles depending on their original tile size. >>> >>> When OpenSpace was built we used our own tile naming convention and >>> directory structure because there were no others in existence. >>> >>> We are now wanting to create tiles from a vector dataset to replace >>> raster >>> dataset at zoom levels 10 and 9 (resolution 1m and 2m) >>> >>> So we have PostGIS with the data in, Geoserver 2.0.2, style sheets >>> (SLDs) >>> and want to use Geowebcache to create the tiles in EPSG 27700 (obviously >>> british national grid ;) >>> >>> After spending sometime with a calculator i have worked out that we need >>> 14,560,000 250x250 pixel tiles to cover GB, with coordinates 0, 0, >>> 700000, >>> 1300000 at 1m resolution. >>> >>> This is the configuration file I have been using, and it works and >>> creates >>> tiles. 
>>> >>> >>> <?xml version="1.0" encoding="utf-8"?> >>> <gwcConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" >>> >>> xsi:noNamespaceSchemaLocation="http://geowebcache.org/schema/1.2.0/geowebcache.xsd" >>> xmlns="http://geowebcache.org/schema/1.2.0"> >>> <version>1.2.0</version> >>> <backendTimeout>300</backendTimeout> >>> >>> <gridSets> >>> <!-- >>> String name; >>> SRS srs; >>> BoundingBox extent; >>> double[] resolutions; >>> double[] scales; >>> String[] scaleNames; >>> Integer levels; >>> Integer tileHeight; >>> Integer tileWidth; >>> --> >>> >>> >>> >>> <gridSet> >>> <name>osgb_vml</name> >>> <srs><number>27700</number></srs> >>> <extent> >>> <coords> >>> <double>0</double> >>> <double>0</double> >>> <double>700000</double> >>> <double>1300000</double> >>> </coords> >>> </extent> >>> <alignTopLeft>false</alignTopLeft> >>> <resolutions> >>> <double>1</double> >>> </resolutions> >>> <tileHeight>250</tileHeight> >>> <tileWidth>250</tileWidth> >>> </gridSet> >>> >>> >>> </gridSets> >>> <layers> >>> >>> >>> <wmsLayer> >>> <name>osgb:vml/name> >>> <gridSubsets> >>> <gridSubset> >>> <gridSetName>osgb_vml</gridSetName> >>> </gridSubset> >>> </gridSubsets> >>> >>> <wmsUrl><string>http://localhost:8080/geoserver/wms</string></wmsUrl> >>> >>> <wmsLayers>osgb:areas,osgb:lines,osgb:roadclines,osgb:text</wmsLayers> >>> <!-- OPTIONAL The metatiling factors used for this layer >>> If not specified, 3x3 metatiling is used for image formats --> >>> <metaWidthHeight><int>1</int><int>1</int></metaWidthHeight> >>> <bgColor>0xFFFFFF</bgColor> >>> </wmsLayer> >>> >>> </layers> >>> </gwcConfiguration> >>> >>> >>> So when i seed this layer and go to the setup page i only the option of >>> seeding between zoom level 00 and finish zoom level 00. >>> >>> Which is fine. >>> >>> The problem as Pete mentioned above is the shear number of directories >>> that >>> only have 4 tiles in each folder. >>> >>> I hope that gives a better picture of what we are up do. 
I appreciate >>> its an >>> extreme case but when we get it sorted we will post up the solution and >>> the >>> timeframes it took. >>> >>> cheers >>> >>> Tim >>> >>> >>> >>> >>> >>> Arne Kepp-2 wrote: >>> >>>> So the plot thickens :) >>>> >>>> You mentioned in another email that you were just seeding one >>>> zoomlevel. >>>> So I guess you also made a grid set configuration with very few levels, >>>> perhaps just 2? >>>> >>>> This isn't useful for most GWC users, you need a tile pyramid with many >>>> levels sot hat people can gradually zoom in. I am guessing that for >>>> OpenSpace you'll be subsampling these tiles to make up intermediate >>>> resolutions. >>>> >>>> So this becomes a very extreme case that the default directory >>>> structure >>>> was definitely not designed to handle, normally you only have 1 to 16 >>>> tiles at zoomlevel 0 , so by the time you get down to 16 million tiles >>>> (z=12 or so) they should be spread across 4000 directories with 4000 >>>> files each. >>>> >>>> If you have a thought-out directory structure for your application, I >>>> would implement that as a new BlobStore, because your needs will be >>>> different from the normal GWC user. Alternatively, you could use the >>>> existing tile structure and just define a few resolutions you will not >>>> use, so that the level with 16 million tiles is z=12 or so. >>>> >>>> Note that you need to clear the cache (delete the directory) if you >>>> change the grid set definition. >>>> >>>> -Arne >>>> >>>> >>>> On 06/10/2010 10:05 PM, TMartin wrote: >>>> >>>>> Hi Arne >>>>> >>>>> Tim here. Pete and I are both working on the same issue here at OS. >>>>> >>>>> In each directory there are only 4 tiles of 250x250pixels and 1metre >>>>> resolution as specified in our geowebcache configuration xml document. >>>>> >>>>> If thats the case we may end up with over 3 million directories for >>>>> the >>>>> 14,560,000 tiles we are creating. 
>>>>> I will try and set it up so you can take a look at it, if you are
>>>>> interested?
>>>>>
>>>>> Tim
>>>>>
>>>>> Arne Kepp-2 wrote:
>>>>>
>>>>>> It's a compromise, and this part of the code is definitely made
>>>>>> easily pluggable so that one can use different structures for
>>>>>> special needs. I was expecting that we would have a few different
>>>>>> implementations by now, but apparently not.
>>>>>>
>>>>>> Every directory level requires a node to be looked up (move the
>>>>>> disk head), so too many directories is not good either. Within a
>>>>>> directory there is a proper index. Listing the contents of the
>>>>>> cache is generally not recommended; I don't know of a single
>>>>>> filesystem that is optimized for that, and most utilities can't
>>>>>> handle something of this size. "find" is usually ok.
>>>>>>
>>>>>> But 400,000 directories... Unless you've discovered a bug, you
>>>>>> should have at least 80 billion tiles. I wouldn't use a
>>>>>> conventional filesystem for that, and on a fast machine (100
>>>>>> tiles per second) it would take 25 years to build a cache of
>>>>>> that size.
>>>>>>
>>>>>> -Arne
>>>>>>
>>>>>> On 06/10/2010 11:22 AM, samclemmens wrote:
>>>>>>
>>>>>>> Cheers, Arne!
>>>>>>>
>>>>>>> I'm curious: what was the rationale for creating a directory
>>>>>>> structure with just one level? We processed a data set earlier
>>>>>>> this week, which resulted in 400,000 directories. When I tried
>>>>>>> an "ls", my PuTTY console froze...
>>>>>>>
>>>>>>> Peter

------------------------------------------------------------------------------
ThinkGeek and WIRED's GeekDad team up for the Ultimate GeekDad Father's Day
Giveaway. ONE MASSIVE PRIZE to the lucky parental unit. See the prize list
and enter to win: http://p.sf.net/sfu/thinkgeek-promo
_______________________________________________
Geowebcache-users mailing list
Geo...@li...
https://lists.sourceforge.net/lists/listinfo/geowebcache-users

--
View this message in context: http://old.nabble.com/GeoWebCache-directory-file-name-structure-tp28819540p28924991.html
Sent from the GeoWebCache Users mailing list archive at Nabble.com. |
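[Editor's note: the path formula discussed in this thread can be sketched in a few lines of Java. The zero-padding widths (3 digits for the directory, 6 for the file) are read off the sample path `roads/EPSG_900913_12/006_019/000848_002540.png`; the real FilePathGenerator derives padding from the zoom level, so treat this as an illustration, not the actual GWC source.]

```java
/** Sketch (not the actual GeoWebCache source) of the tile path scheme. */
public class TilePathSketch {

    /**
     * Builds layer/gridset_z/XX_YY/x_y.ext, where the intermediate
     * directory groups tiles by x / 2^(1 + z/2) and y / 2^(1 + z/2),
     * matching Arne's correction to the wiki formula.
     */
    public static String tilePath(String layer, String gridSet,
                                  int z, long x, long y, String ext) {
        long shift = z / 2;          // integer division
        long half = 2L << shift;     // 2^(1 + z/2), e.g. 128 for z=12
        String dir = String.format("%03d_%03d", x / half, y / half);
        String file = String.format("%06d_%06d.%s", x, y, ext);
        return layer + "/" + gridSet + "_" + z + "/" + dir + "/" + file;
    }

    public static void main(String[] args) {
        // Reproduces the sample path from the thread:
        // 848 / 128 = 6 and 2540 / 128 = 19, hence the 006_019 directory.
        System.out.println(tilePath("roads", "EPSG_900913", 12, 848, 2540, "png"));
        // -> roads/EPSG_900913_12/006_019/000848_002540.png
    }
}
```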
|
From: Arne K. <ar...@ti...> - 2010-06-18 12:30:44
|
Hi,

remember that although you only have to rewrite one file, you should make a
complete BlobStore implementation so that you can use Spring to plug it in.
Take 20 minutes to set up Eclipse so you can run it in a debugger and see
for yourself.

prefix: base path provided by the parent FileBlobStore, e.g.
  /var/lib/geowebcache/tiles
gridSetId: <name> of the gridSet element in geowebcache.xml
parameters_id: only if you use modifiable parameters (you don't); a
  combination keya=value&keyb=othervalue will be represented by some
  integer value, -1L if not applicable
x: x-index in the cartesian plane of tiles; whether that's northing or
  something else depends on the SRS

I left OpenGeo back in February, but Gabriel would know whether there are
AMI plans for the suite.

-Arne

On 6/18/10 1:23 PM, samclemmens wrote:
> Hi Arne,
>
> Tim is processing the data in EC2 per your recommendation above. He
> estimates that it will take 30 hours to process national coverage for
> Great Britain and then another 15 hours to create the spatial indexes...
> In the meantime, I have a question about the parameters in the
> FilePathGenerator class. (Some of these are obvious, but I just want to
> confirm.) Can you tell me what the following are?
>
> - prefix
> - tileIndex
> - gridSetId
> - parameters_id
> - x (northing?)
> - y (easting?)
> - gridSetStr
>
> By the way, have you guys considered creating an AMI with the new OpenGeo
> stack? I got most of the individual pieces working on a Fedora instance
> last month, but it would have been a heck of a lot easier to just fire up
> an AMI...
>
> Thanks,
>
> Peter
|
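[Editor's note: Arne's advice is to copy the file blob store package and change only the path layout. The sketch below shows what such a replacement layout method might look like; the class name, method name, and the flat `layer/z/x/y.ext` layout are all illustrative placeholders, not the real GWC API or the actual OpenSpace scheme.]

```java
/**
 * Hypothetical sketch of a swapped-in path layout, in the spirit of
 * copying FilePathGenerator and changing how the path is assembled.
 */
public class CustomTilePath {

    /**
     * Example alternative layout: prefix/layer/z/x/y.ext.
     * The real generator also receives a gridSetId and a parameters_id
     * (see Arne's parameter notes above), omitted here for brevity.
     */
    public static String altPath(String prefix, String layer,
                                 int z, long x, long y, String ext) {
        return prefix + "/" + layer + "/" + z + "/" + x + "/" + y + "." + ext;
    }

    public static void main(String[] args) {
        System.out.println(altPath("/var/lib/geowebcache/tiles",
                "roads", 12, 848, 2540, "png"));
        // -> /var/lib/geowebcache/tiles/roads/12/848/2540.png
    }
}
```

A class like this would then be wired in via Spring in WEB-INF/geowebcache-core-context.xml, as described earlier in the thread.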