Thread: [linux-vrf-core] VRF number limitation
From: Affandi I. <aff...@gm...> - 2005-07-13 06:36:33
Hi,

I would like to ask about the VRF limitation: I can only configure up to 8 VRFs (and VRF 0 is already in use). In the archive I saw a reply about this limitation, but I don't quite understand what it means. Here is the previous posting regarding the VRF limit:

----------------------------------------------------------------------
Currently there is a hard limit of 8 VRFs (0-7). While working out bugs I have decided not to dynamically allocate all of the data structures required to keep track of TCP, UDP, and raw sockets per VRF.

Until I make the conversion to dynamically allocating per-VRF data you can change the compile-time limit by modifying VRF_MAX in linux/include/linux/rtnetlink.h
----------------------------------------------------------------------

The question is: is there any parameter I can change so that I can have more than 8 VRFs? I tried to find the VRF_MAX variable in rtnetlink.h, but had no luck.

Really appreciate your help. Thank you.

--
Regards,
Affandi Indraji
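For context, the fix the quoted posting describes is a compile-time change followed by a kernel rebuild. The patched header itself is not shown in this thread, so the snippet below is only a hedged sketch of what the linux-vrf addition to include/linux/rtnetlink.h might look like (the default of 7 comes from Cho Kyung Hyun's reply further down):

    /* Illustrative sketch only -- the real patched header may differ.
     * linux-vrf 0.900 reportedly defines the highest usable VRF id in
     * include/linux/rtnetlink.h; raise it and recompile the kernel to
     * allow more VRFs. */
    #define VRF_MAX 7                              /* default: VRF ids 0..7 */

    /* Per-VRF state is then sized statically from it, e.g. (hypothetical): */
    struct fib_table *vrf_main_fib[VRF_MAX + 1];   /* one main FIB per VRF  */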
From: Affandi I. <aff...@gm...> - 2005-07-18 07:01:05
Thanks to Steffen and Kyung Hyun,

I'm able to set more than 8 VRFs after I modified rtnetlink.h. The maximum I can configure is 190; with more than that I can't boot, the system reports a kernel panic and hangs. Just curious, how many VRFs is the maximum that can be configured?

Another thing: I can boot the kernel with 190 VRFs, but once I run something (I ran "tar xjf" on a file), it reported "system out of memory" and I was kicked out of the SSH session, or if I'm on the console, I get logged out automatically. Does any of you know the maximum number of VRFs with which the server can still run in a stable condition?

Really appreciate your help. Thanks.

Regards,
Fandi

On 7/14/05, Cho Kyung Hyun <dru...@ho...> wrote:
> The parameter VRF_MAX is included in rtnetlink.h if the kernel
> source (header) has been "patched" with linux-vrf 0.900.
>
> You may change the value of VRF_MAX (which is 7 by default) to whichever
> value you like, and the recompiled kernel will be able to create as many
> VRFs as you designated in rtnetlink.h.
>
> I hope this answers it.
>
> Have a great day.
From: Steffen M. <sm...@us...> - 2005-07-18 10:59:19
Hello Affandi,

On Mon, 18 Jul 2005, Affandi Indraji wrote:
> I'm able to set more than 8 vrf, after I modify rtnetlink.h. The max
> that i can configure is 190, if more than that i can't boot, the
> system said that "kernel panic" and the system hang. Just curious, how
> many vrf is the max number that can be configured?

In theory it should be just the maximum value that "unsigned char vrf" can hold, so I would assume at least 255.

> Another thing is, i can boot using the kernel with 190 vrf, but once i
> run something (i run "tar xjf" on a file), it said that "system out of
> memory" and i was kicked out from the SSH session, or if i'm in the
> console, i will be automatically logout. Any of you know about the
> limitation of the max number of VRF whereby the server can still run
> with the stable condition?

I don't know of any limitation other than the one mentioned above. VRF_MAX=65 works for me. However, I have to admit that I use VRF 0.100 with Linux 2.4.24 plus a bunch of my own fixes, first to adapt the patches to kernel 2.4.24 and then to avoid incidental kernel oopses. The oopses were due to the in-kernel memory management of the VRF extension. You might see similar oopses in your kernel logs, even in cases where it only seems that a user-space process terminated during some activity.

Regards,
Steffen.
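To make Steffen's reasoning concrete: if the VRF id really is stored as an unsigned char (an assumption here, not verified against the patch), the id space is 0..255, so at most 256 VRFs counting VRF 0. A minimal check:

    /* Sketch, assuming the patch stores the VRF id as an unsigned char. */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned char vrf = UCHAR_MAX;   /* highest representable VRF id */
            printf("max VRF id = %u, total ids = %u\n",
                   (unsigned)vrf, (unsigned)vrf + 1u);
            /* prints: max VRF id = 255, total ids = 256 */
            return 0;
    }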
From: Cho K. H. <dru...@ho...> - 2005-07-19 02:41:33
"Possibly" a problem with the limit on the kernel stack size: the linux-vrf 0.900 patch makes the kernel keep as many routing tables (fib_tables) as VRF_MAX, and other structures as well. The Linux kernel really has a hard (and low) limit on its stack size, and when a stack overflow happens it is possible to see a kernel panic.

I'm really not sure about it, as I have only tried 8 for my VRF_MAX.

==========================
A happy present is good!

dru...@ho... (Cho Kyung Hyun)
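Whether the panic at 190 VRFs is truly a stack overflow or simply static per-VRF data outgrowing available memory is not settled in this thread, but the scaling Cho describes is easy to illustrate. The snippet below is purely hypothetical (the real structure names and table counts in the 0.900 patch are not shown here); it only shows how arrays dimensioned by VRF_MAX grow linearly with it:

    /* Hypothetical illustration -- not the actual linux-vrf data structures. */
    #define VRF_MAX       190
    #define RT_TABLE_MAX  255    /* assumed number of routing tables per VRF */

    struct fib_table *vrf_fib_tables[VRF_MAX + 1][RT_TABLE_MAX + 1];
    /* The pointer array alone is 191 * 256 * sizeof(void *), roughly 191 KiB
     * on a 32-bit machine, before any of the tables themselves are allocated;
     * every other statically sized per-VRF structure scales the same way. */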
From: Affandi I. <aff...@gm...> - 2005-07-19 03:11:44
Hi guys,

Thank you for responding. Currently I'm trying to run with 100 VRFs and doing some stability tests.

Will keep you updated if I find anything interesting.

Rgds,
Affandi
From: Affandi I. <aff...@gm...> - 2005-07-22 04:36:22
Hi all,

I experience something strange if I configure different interfaces with the same IP address (on different VRFs):

ip addr
27: eth1.223: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue vrf 23
    link/ether 00:02:55:7b:20:75 brd ff:ff:ff:ff:ff:ff
    inet 10.11.3.2/24 scope global eth1.223
28: eth1.224: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue vrf 24
    link/ether 00:02:55:7b:20:75 brd ff:ff:ff:ff:ff:ff
    inet 10.11.3.2/24 scope global eth1.224

ip route
10.11.3.0/24 dev eth1.223 proto kernel scope link src 10.11.3.2 vrf 23
default via 10.11.3.1 dev eth1.223 vrf 23
10.11.3.0/24 dev eth1.224 proto kernel scope link src 10.11.3.2 vrf 24
default via 10.11.3.1 dev eth1.224 vrf 24

-----------------------------------------------
test# chvrf 24 ping 10.11.3.1
PING 10.11.3.1 (10.11.3.1) 56(84) bytes of data.
64 bytes from 10.11.3.1: icmp_seq=1 ttl=255 time=1.04 ms
64 bytes from 10.11.3.1: icmp_seq=2 ttl=255 time=0.387 ms
64 bytes from 10.11.3.1: icmp_seq=3 ttl=255 time=0.503 ms

test# chvrf 23 ping 10.11.3.1
PING 10.11.3.1 (10.11.3.1) 56(84) bytes of data.
--- 10.11.3.1 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 5999ms
------------------------------------------

As you can see, I can ping from vrf 24 but not from vrf 23. BUT, if I wait for a while I can ping from vrf 23 but not from vrf 24 (see below).

------------------------------------------
test# chvrf 23 ping 10.11.3.1
PING 10.11.3.1 (10.11.3.1) 56(84) bytes of data.
64 bytes from 10.11.3.1: icmp_seq=1 ttl=255 time=0.811 ms
64 bytes from 10.11.3.1: icmp_seq=2 ttl=255 time=0.369 ms
64 bytes from 10.11.3.1: icmp_seq=3 ttl=255 time=0.359 ms

test# chvrf 24 ping 10.11.3.1
PING 10.11.3.1 (10.11.3.1) 56(84) bytes of data.

--- 10.11.3.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
---------------------

Is there a limitation that prevents using the same IP address in different VRFs? The reason I want to try it is to support MPLS, using the same IP address in different VRFs.

/affandi
From: James R. L. <jl...@mi...> - 2005-07-22 05:27:02
Hmmm, very interesting. My guess is that the route cache isn't keeping VRF entries separate. If so, it's definitely a bug. To verify, look in /proc/net/rt_cache* and see what the entries look like for the destination you're pinging.

On Fri, Jul 22, 2005 at 12:26:20PM +0800, Affandi Indraji wrote:
> Is there any limitation that we can't use the same IP address for
> different vrf? The reason I want to try it is to support the MPLS,
> using the same IP address for different VRF.

--
James R. Leu
jl...@mi...
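If the cache really is the culprit, the symptom fits a lookup key that omits the VRF id: whichever VRF resolves 10.11.3.1 first owns the cached entry until it expires, which matches the "works in one VRF, then in the other after a while" behaviour above. The comparison below is purely illustrative (it is neither the 2.4 route.c code nor the actual linux-vrf patch); it only shows the kind of field a VRF-aware cache key would need:

    /* Hypothetical sketch of a VRF-aware route-cache key comparison. */
    struct rt_cache_key {
            unsigned int  dst;     /* destination address */
            unsigned int  src;     /* source address      */
            int           oif;     /* output interface    */
            unsigned char tos;
            unsigned char vrf;     /* without this field, VRFs share cache hits */
    };

    static inline int rt_cache_key_equal(const struct rt_cache_key *a,
                                         const struct rt_cache_key *b)
    {
            return a->dst == b->dst && a->src == b->src &&
                   a->oif == b->oif && a->tos == b->tos &&
                   a->vrf == b->vrf;
    }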