* [Qemu-devel] qemu & kernel: addresses generated are non-uniform

From: sparsh mittal
Date: 2011-11-18 14:49 UTC
To: qemu-devel

Hello

I am using MARSS with QEMU, but this question concerns both QEMU and the
kernel. When I run:

    qemu-system-x86_64 -m 4G myImage.img

and print the physical addresses that are passed to the cache hierarchy,
I see that they are not spread uniformly across the address range. For
example:

    GB range    number of addresses
    0.0-0.5     3325
    0.5-1.0     1253
    1.0-1.5     0
    1.5-2.0     30
    2.0-2.5     0
    2.5-3.0     1708
    3.0-3.5     10521
    3.5-4.0     0
    4.0-4.5     15428

This affects my work in the following way: in MARSS (a cycle-accurate
simulator for x86), these addresses are used to access the cache
hierarchy. If the physical addresses fall only within certain ranges,
then only a few cache sets are accessed and the others are not. I am
studying caches, and because of this the cache is used unevenly.

Can I do something to make these addresses uniform? Since it seems to be
a kernel issue, I don't know what can be done. I would be grateful for
any help.

Sparsh
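[A minimal sketch of the modulo set-index computation at issue here; the
cache geometry below is an assumption for illustration, not MARSS's
actual configuration:]

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical cache geometry -- adjust to the simulated cache. */
    #define LINE_SIZE 64    /* bytes per cache line */
    #define NUM_SETS  4096  /* e.g. 2 MiB, 8-way, 64 B lines */

    /* Set index for a physical address: the block number modulo the
     * number of sets. The index pattern repeats every
     * LINE_SIZE * NUM_SETS bytes (256 KiB here), so addresses
     * clustered into a few GB-sized ranges still sweep all sets. */
    static unsigned set_index(uint64_t paddr)
    {
        return (unsigned)((paddr / LINE_SIZE) % NUM_SETS);
    }

    int main(void)
    {
        uint64_t a = 0x12345678ULL;
        printf("paddr 0x%llx -> set %u\n",
               (unsigned long long)a, set_index(a));
        return 0;
    }

[With this geometry the set index wraps every 256 KiB, which is why the
GB-scale clustering reported above need not concentrate accesses on a
few sets.]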
* Re: [Qemu-devel] qemu & kernel: addresses generated are non-uniform

From: Mulyadi Santosa
Date: 2011-11-21 2:13 UTC
To: sparsh mittal
Cc: qemu-devel

On Fri, Nov 18, 2011 at 21:49, sparsh mittal <sparsh0mittal@gmail.com> wrote:
>     GB range    number of addresses
>     0.0-0.5     3325
>     0.5-1.0     1253
>     1.0-1.5     0
>     1.5-2.0     30
>     2.0-2.5     0
>     2.5-3.0     1708
>     3.0-3.5     10521
>     3.5-4.0     0
>     4.0-4.5     15428

Hi...

I have never observed the address usage you describe, but I think it is
expected. The Linux kernel tends to allocate pages, including their page
tables, from high memory (above 896 MiB) first. This is done to lower
the "pressure" on the normal memory zone.

As for the imbalance, my guess is that it comes from heavy slab usage. I
am not sure where exactly slabs end up in RAM, but they act as caches
for frequently used objects such as task structs, bios, and socket
buffers.

So, as you can guess, this is a consequence of Linux memory management,
which is quite complicated. I am not sure there is a shortcut to reshape
this distribution.

--
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com
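[A sketch of the classic 32-bit x86 zone boundaries Mulyadi is referring
to (ZONE_DMA below 16 MiB, ZONE_NORMAL up to 896 MiB, ZONE_HIGHMEM
above); zone_of() is a made-up helper for illustration, not a kernel
API, and a 64-bit guest lays its zones out differently:]

    #include <stdint.h>
    #include <stdio.h>

    /* Classic 32-bit x86 zone boundaries (illustrative only). */
    #define ZONE_DMA_END     (16ULL << 20)   /* 16 MiB  */
    #define ZONE_NORMAL_END  (896ULL << 20)  /* 896 MiB */

    /* Classify a physical address into the zone it would belong to
     * on a 32-bit x86 kernel with the classic layout. */
    static const char *zone_of(uint64_t paddr)
    {
        if (paddr < ZONE_DMA_END)
            return "ZONE_DMA";
        if (paddr < ZONE_NORMAL_END)
            return "ZONE_NORMAL";
        return "ZONE_HIGHMEM";
    }

    int main(void)
    {
        uint64_t a = 0x38000000ULL; /* exactly 896 MiB */
        printf("0x%llx -> %s\n", (unsigned long long)a, zone_of(a));
        return 0;
    }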
* Re: [Qemu-devel] qemu & kernel: addresses generated are non-uniform

From: sparsh mittal
Date: 2011-11-21 23:00 UTC
To: Mulyadi Santosa
Cc: qemu-devel

Thanks for the answer. For now I am OK with this, since I realized that
cache access is still uniform (cache sets are computed by a modulo on
the address) and my work concerns the cache. Still, I am glad to know
that this behavior is expected rather than anomalous.

I also have one very important question, and I would be grateful for an
answer:

I am using the MARSS cycle-accurate simulator, which is built on QEMU.
It is a full-system simulator and reports both user and kernel
statistics. However, even if I take only the user statistics, they vary
considerably between two runs. To clarify: if I run a simulation with
some configuration once, and then a second time without changing
anything, the statistics differ.

I was wondering whether this has something to do with QEMU, and whether
there is a QEMU option that can make simulation deterministic. I tried
using -icount auto, and some variation remains. Is it true that the load
on the host machine affects QEMU's behavior? A friend observed that if
two simulations (with different configurations) are run in parallel, the
variation is larger than if they are run in series (one after another).
With SimpleScalar EIO files, I have never observed any variation.

I would be grateful for some help.

Thanks and Regards
Sparsh Mittal

On Sun, Nov 20, 2011 at 8:13 PM, Mulyadi Santosa
<mulyadi.santosa@gmail.com> wrote:
> [full quote of the previous message trimmed]
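[A hedged note on -icount: "-icount N" with a fixed shift N ties the
guest virtual clock to the number of executed instructions, whereas
"-icount auto" re-tunes the shift against host speed, which is one way
host load can change guest timing between runs. A fixed shift is
therefore worth trying, e.g.:]

    qemu-system-x86_64 -m 4G -icount 1 myImage.img

[Even with a fixed shift, timer interrupts, device I/O, and other host
interaction can still introduce run-to-run variation, so this is a
mitigation rather than a guarantee of determinism.]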