> -----Original Message-----
> From: akshay st [mailto:email@example.com]
> Sent: Wednesday, 24 April 2013 18:45
> To: Rossier Daniel; firstname.lastname@example.org
> Subject: Re: [Embeddedxen-devel] Regarding DOMU Xen-ARM
> Thanks Daniel for the reply.
> I have directly downloaded EmbeddedXen 2.1 from SourceForge. Can you
> please point me to the git path? I couldn't find one.
Sure, you can find all information here: http://sourceforge.net/p/embeddedxen/code/ci/master/tree/
The cloning line is: git clone git://git.code.sf.net/p/embeddedxen/code embeddedxen-code
> Any issues with source forge code repository?
No, the code is working, but we have reworked quite a lot of things and fixed various issues in the git head.
So, I recommend working with the git version.
You will also find some additional domU kernels (especially 3.4.6-domU), which are more recent and can work
with a complete rootfs (the rootfs is also available for download: http://sourceforge.net/projects/embeddedxen/files/rootfs/ ;
we will also upload a light Qt-enabled rootfs).
By the way, which domU are you using?
> I have got a basic domU without any drivers, but with a simulated UART in Linux (for
> the busybox application), working with some hacks. Basically I make a call
> in multiple places of devicemaps_init(). For some reason this solved the issue,
> although I don't know how it got solved.
> I am thinking the problem may be the following:
> basically, the code clears the page table entry for 0xffff0000; after that there is a
> flush_cache_range(), and then my disable_IRQ and local_tlb_flush_all(). Now, even
> though flush_cache_range() clears the cache, if an interrupt occurs while the TLB entry is valid, it can
> repopulate the cache after a TLB walk. However, if the TLB entry is not valid, it may
> hang. I am thinking the solution could be to disable IRQs before we clear the page table
> entry and re-enable them after we copy the vectors properly. I will try this
> sometime tomorrow. Does this theory make any logical sense?
Definitely. If you call flush_cache_range(), make sure IRQs are disabled beforehand, otherwise
you may get inconsistencies.
Back to the current code (2.6.26-domU), IRQs are disabled anyway when devicemaps_init() is called.
Maybe it is worth summarizing the different steps we need to take care of in devicemaps_init().
Let's examine the situation at the beginning of devicemaps_init():
The currently running vector page (at the 0xffff0000 location) is actually a simple mapping of the original hypervisor vector page,
but in the domU virtual address space. We can't simply leave this page as such for domU, because there is some other
data used by the user helpers of the guest domain (currently it holds information used
by dom0). For sure, domU
will need to use its own helper data. Furthermore, some additional cache flush mappings can reside in this page as well,
which are partly controlled by the guest (not only by the hypervisor).
It means that domU must have its own vector page, even if the interrupt vectors remain the same, because the ISRs
still belong to the hypervisor.
So, we start doing a copy of the vector page in a new page allocated to domU, thus preserving the hypervisor vectors.
Then, we also allocate a guest vector page (another domU-private page) which will store the *real* vectors of domU
used during the upcall path to call the right handlers. This guest vector page must be known by the domU kernel, but
it is not resident at 0xffff0000.
Finally, we map the true vector page (0xffff0000) onto the duplicated page containing the hypervisor vectors, in order
to make this vector page private to the domU address space.
Then we do a flush which, in our case as you saw, leads to a hypercall, which in turn re-enables IRQs during the upcall
(so, after the hypervisor has done the flush!).
That's why all the IRQ vector machinery has to be set up correctly before descending into the hypervisor.
> One more thing: if I mail you directly (of course keeping the embeddedxen mailing list in
> CC), will it work?
> Warm Regards,
> ----- Original Message -----
> From: Rossier Daniel <Daniel.Rossier@heig-vd.ch>
> To: Rossier Daniel <Daniel.Rossier@heig-vd.ch>; akshay st
> Sent: Wednesday, 24 April 2013 4:54 PM
> Subject: RE: [Embeddedxen-devel] Regarding DOMU Xen-ARM
> By the way, are you using the release from files available on Sourceforge or
> from git directly?
> We strongly suggest that, for now, you work with the latest git sources.
> We plan to make a new release (with tar.gz files) in the middle of the year.
> In the git version, we're doing nearly daily updates.
> > -----Original Message-----
> > From: Rossier Daniel [mailto:Daniel.Rossier@heig-vd.ch]
> > Sent: Wednesday, 24 April 2013 13:01
> > To: akshay st; email@example.com
> > Subject: Re: [Embeddedxen-devel] Regarding DOMU Xen-ARM
> > Hi Akshay,
> > > -----Original Message-----
> > > From: akshay st [mailto:firstname.lastname@example.org]
> > > Sent: Tuesday, 23 April 2013 18:46
> > > To: email@example.com
> > > Subject: [Embeddedxen-devel] Regarding DOMU Xen-ARM
> > >
> > > Hi,
> > > I took the EmbeddedXen 2.1 sources for ARM, and I am modifying them for my
> > > board. I could understand and bring up dom0 without issues. However, I
> > > have some doubts regarding domU w.r.t. devicemaps_init().
> > > Here we don't
> > > clear 0xFFFF0000 for the high vectors, assuming that an ISR may
> > > come during an upcall (because of the flush). However, in my implementation I don't use Xen
> > > hypercalls for TLB/cache flushes; I take the same implementation as
> > > dom0 (which I guess should be OK?). Basically I keep the devicemaps_init()
> > > implementation the same as dom0. With this, when I run local_flush_tlb_all(),
> > > Linux hangs. I have disabled IRQs beforehand using local_irq_disable(). I
> > > don't know why it hangs; any pointers will be helpful.
> > Well, even if you do not use hypercalls at this stage, you need consistent
> > vectors, since some IRQs
> > previously configured by dom0 may occur (of course, if you disable IRQs, you
> > should not have any problem).
> > That's why the domU vectors are placed somewhere else in memory.
> > Try to debug the flush function step by step to see what happens. It may
> > well be a freeze after local_flush_tlb_all(),
> > at a point where IRQs get re-enabled...
> > >
> > > One more question on devicemaps_init():
> > > why
> > > is the code below commented out in domU and not in dom0? Does the Xen
> > hypervisor
> > > populate any of this area? If so, what does it do, and can you
> > > point me to the code?
> > > #if 0
> > > for (addr = VMALLOC_END; addr < HYPERVISOR_VIRT_START; addr += PGDIR_SIZE)
> > >     pmd_clear(pmd_off_k(addr));
> > > #endif
> > Basically, you do not need any I/O in domU, since hardware accesses are under the
> > control of dom0,
> > except for debug purposes (but that mainly concerns the UART).
> > So we can leave the mapped I/O as such in domU without doing any I/O
> > mapping.
> > I hope it helps.
> > Regards
> > Daniel
> > >
> > > Warm Regards,
> > > Akshay
> > >
> > >
> > > _______________________________________________
> > > Embeddedxen-devel mailing list
> > > Embeddedxenfirstname.lastname@example.org
> > > https://lists.sourceforge.net/lists/listinfo/embeddedxen-devel