[00:01:38] <mozmck> anyone know if it is a bad idea to use PREEMPT in the configuration for a realtime kernel?
[00:29:50] <jepler> mozmck: it's not going to help anything about rtai performance and has apparently been related to some bugs in ipipe which I don't understand much about: http://mid.gmane.org/[email protected]
[00:31:09] <mozmck> I'm just curious. I enabled it in the last few kernels I've compiled, and it works fine on several computers, but I have one with an Intel 845G Brookdale chipset that won't boot.
[00:31:36] <mozmck> I may try again without PREEMPT and see if that helps.
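For reference, the preemption model a kernel was built with can be checked against its shipped config (standard Debian/Ubuntu config location assumed):
    grep CONFIG_PREEMPT /boot/config-$(uname -r)
    # CONFIG_PREEMPT_NONE=y is the usual choice for RTAI kernels;
    # CONFIG_PREEMPT=y is the full-preemption option discussed above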
[13:09:52] <micges_work1> micges_work1 is now known as micges
[13:32:52] <alex_joni> micges: you got mail
[16:43:15] <mshaver> jepler et al. - Is there any information on using the AXIS preview window by itself? I imagine somehow instantiating a preview window, sending it a tool table and var file, then feeding it g-code to plot. I had heard (don't remember where) that some work was done towards making the preview component independent of AXIS, but didn't see anything on the wiki. I'd like to be able to use a program like Dewey Garrett's ngcgui in standalone mode.
[16:45:40] <cradek> mshaver: not an exact answer to your question, but you could run full AXIS and use axis-remote to cause it to load or reload the program
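A sketch of that axis-remote usage (flag names per the AXIS documentation of the era; the exact option set may vary by version):
    axis-remote --ping              # check that an AXIS instance is listening
    axis-remote /path/to/part.ngc   # load a program into the running AXIS
    axis-remote --reload            # re-read the currently loaded program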
[16:50:22] <cradek> but a built-in preview for ngcgui would be pretty cool...
[16:50:56] <mshaver> I thought Jeff made the preview component separable?
[16:51:35] <mshaver> I wanted to be able to do this in a non-realtime equipped system. Maybe even a non-Linux one...
[16:51:49] <cradek> you don't need realtime to run emc
[16:51:58] <cradek> (not sure about non-linux)
[16:52:12] <cradek> but yeah jeff did some work with that but ran out of steam before integrating it
[16:52:47] <mshaver> Ah, OK. I was just wondering where that had ended up.
[16:53:23] <jepler> it's branch glrefactor at http://git.unpy.net/view?p=emc2-jepler.git;a=summary
[16:53:40] <jepler> the sample program that uses the refactored code still expects emc to be running, because it does backplot
[16:54:01] <jepler> but maybe it's easier to hack that out of the new gui, "gremlin", than out of axis
[16:55:22] <mshaver> Cool! Something to look at! Gremlin you say?
[16:56:13] <jepler> gremlin embeds the 3d view of axis in a pygtk application
[16:56:40] <jepler> cd
[16:56:41] <jepler> argh
[16:57:41] <mshaver> I'll 'pull' this tonight & have a look! Thanks Jeff.
[16:58:57] <jepler> best of luck
[17:02:28] <jepler> maybe I should merge it while there's still time
[17:02:45] <jepler> but, ugh, I don't want to deal with whatever breakage it'll cause
[17:08:49] <mozmck_work> mshaver: did you ever get my karmic packages to work?
[17:08:49] <cradek> I think an offline preview (no backplot, program specified on the command line) would be super cool
[17:15:56] <jepler> but then you need e.g., a tool file parser since you can't obtain tool information via nml
[17:17:40] <cradek> for diameters?
[17:17:52] <jepler> lathe tool orientation is another thing you need
[17:18:00] <cradek> hm.
[17:18:30] <cradek> I sure want to say I don't care about that, but I think that's not true
[17:18:50] <jepler> I'm pretty sure you want to be able to look at the programmed or offset path
[17:19:24] <jepler> ls
[17:19:27] <jepler> argh argh argh
[17:19:56] <jepler> anyway, glrefactor merges cleanly with master
[17:20:26] <cradek> 9 days...
[17:23:55] <alex_joni> planning a trip?
[17:24:06] <cradek> feature freeze
[17:24:13] <alex_joni> ah
[17:33:33] <jepler> of course I'm the boss so the deadline doesn't apply to me :-/
[17:33:44] <jepler> maybe my personal deadline is 9 days earlier
[17:39:51] <CIA-5> EMC: jepler master * r8e9f2902884f /src/emc/usr_intf/axis/extensions/minigl.c: add opengl functions related to glDrawPixels
[17:42:47] <jepler> .. that means the only changes in glrefactor are in .py files, so that someone who wants to use the refactored stuff from the branch can use a copy of those files and not disturb axis from working .. just have to set python's sys.path right
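One way to set python's sys.path from outside, as described, is via PYTHONPATH (the checkout path and script name here are hypothetical):
    # point python at the refactored .py files from the glrefactor branch
    export PYTHONPATH=$HOME/src/emc2-jepler/lib/python:$PYTHONPATH
    python gremlin.py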
[18:19:35] <alex_joni> micges: hi
[18:19:49] <micges> hi
[18:20:00] <micges> I've got mail
[18:20:16] <alex_joni> cool.. pm
[18:40:55] <mshaver> mozmck_work: no, but I forgot what doesn't work. I've got stuff to do today/tonight, but I'll check it out soon. Thanks!
[20:06:00] <mozmck_work> hmm, does the kernel timer frequency make any difference for realtime stuff? looks like the default is 250hz, but I'm thinking of using 100 hz if that could help with latency
[20:07:04] <SWPadnos> shouldn't
[20:07:13] <SWPadnos> higher numbers may use more CPU time
[20:07:21] <SWPadnos> but other than that, it shouldn't really matter
[20:08:46] <mozmck_work> ok. I was thinking higher numbers could possibly increase jitter
[20:12:48] <jepler> if the linux kernel tick causes bad latency, changing its frequency won't reduce max latency anyway; it would just change how often the bad latency occurs.
[20:18:19] <mozmck_work> ah, I see.
[20:23:39] <jepler> but if rtai is working right then the linux kernel tick is blocked when realtime tasks need to run, so it shouldn't be capable of causing latency anyway
[20:25:07] <jepler> closest I can find to an answer to this question is that when it was asked it went unanswered by Paolo: http://mid.gmane.org/[email protected] ("could it be due to config_hz?" it says in the quoted portion)
[20:30:01] <mozmck_work> Thanks. I'm just looking through options trying to learn more. Who does the kernel for the official emc releases?
[20:31:57] <jepler> a volunteer
[20:32:05] <jepler> whoever we can goad into doing it
[20:32:39] <mozmck_work> oh :) I'm working with 2.6.32.2 right now, and ubuntu 10.04 will use 2.6.32.x
[20:32:39] <cradek> the poor sucker
[20:32:40] <alex_joni> I plan on trying for 10.04
[20:32:47] <mozmck_work> maybe I can help
[20:32:56] <alex_joni> mozmck_work: surely
[20:33:30] <mozmck_work> I managed to get the stock 9.10 kernel patched and working on a number of computers.
[20:33:49] <cradek> mozmck_work: did you make deb packages?
[20:34:12] <mozmck_work> cradek: that's what is under karmic on experimental
[20:34:22] <cradek> neat.
[20:34:47] <mozmck_work> it is smp, but I've run it on single core computers.
[20:34:56] <jepler> mozmck_work: great. next we need you to learn how to respin the live cds
[20:35:05] <mozmck_work> I also built a live cd.
[20:35:17] <jepler> great. you're elected to do it officially from now on
[20:35:18] <mozmck_work> a
[20:35:21] <alex_joni> see.. job done
[20:35:23] <mozmck_work> uhoh
[20:36:14] <jepler> fwiw the kernel of the moment in the lucid alpha2 is called 2.6.32-10-generic
[20:36:38] <mozmck_work> :) one question I have is, has anyone used the ubuntu build system for making an rtai patched kernel?
[20:36:51] <cradek> you mean make-kpkg?
[20:37:20] <alex_joni> cradek: make-kpkg is so 6.06 ;)
[20:37:38] <alex_joni> mozmck_work: that's what we used for 8.04
[20:37:51] <mozmck_work> cradek: no, they have a debian directory in the git tree
[20:38:41] <mozmck_work> but they split up the config file and I can't get a good answer on how to make changes except with a text editor looking through a bunch of files.
[20:39:31] <cradek> oh, obviously I have no idea
[20:39:36] <cradek> I last touched it on dapper
[20:39:52] <mozmck_work> jepler: I don't think the -10 means much. I believe they leave off the last .2 or whatever and put their own revision with a -
[20:40:33] <mozmck_work> the new system looks much harder to me to use for making many changes, but I may just not have asked the right people yet.
[20:41:26] <mozmck_work> I have asked several times on #ubuntu-kernel, and got different answers. but they say it's best not to use make-kpkg anymore because it is not maintained for ubuntu...
[20:42:00] <jepler> alex and I did 8.04 in collaboration. at that time what we ended up with was a full pair of config files in debian/binary-custom.d/rtai/config.{i386,amd64}
[20:42:17] <jepler> and built the kernel with DEB_BUILD_OPTIONS=parallel=2 NOEXTRAS=1 fakeroot debian/rules custom-binary-rtai
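Put together, the hardy-era recipe is roughly the following (deb-src entries must be enabled; the config filename is illustrative):
    apt-get source linux-image-$(uname -r)   # fetch the Ubuntu kernel source tree
    cd linux-2.6.24*/
    # drop a full config into the custom-binary area for the rtai flavour
    cp my-rtai-config debian/binary-custom.d/rtai/config.i386
    DEB_BUILD_OPTIONS=parallel=2 NOEXTRAS=1 fakeroot debian/rules custom-binary-rtai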
[20:43:46] <mozmck_work> hmm, so you used the ubuntu system but made complete .config files and told it to use them?
[20:44:43] <mozmck_work> did you use a vanilla kernel or the ubuntu one?
[20:44:45] <jepler> they have some provision for "custom binaries", like they use for the -rt kernel
[20:44:53] <jepler> the ubuntu kernel
[20:45:41] <mozmck_work> I see. that's what I did. I think the build system has changed some since then, but I need to look at it some more.
[20:47:57] <jepler> here are all the changes we made compared to the ubuntu kernel on hardy: http://emergent.unpy.net/files/sandbox/linuxcnc-hardy-kernel-patches.mbox
[20:53:13] <mozmck_work> thanks, I'll look at that
[20:54:12] <alex_joni> mozmck_work: iirc I started with vanilla sources, and did the config
[20:54:19] <mozmck_work> I need to get a quad core system to compile kernels... this takes forever.
[20:54:33] <alex_joni> then got the ubuntu sources and put the config like jeff explained
[20:54:44] <alex_joni> install ccache
[20:54:59] <cradek> yeah, if you think you'll need to do it more than once (and you will!), install ccache **first**
[20:55:04] <jepler> I think mozmck_work should get both a quad core system and ccache
[20:55:09] <mozmck_work> alex_joni: ok. I've done both now. I used the ubuntu config on the vanilla kernel.
[20:55:15] <jepler> not only install it, but make sure it's working!
[20:55:25] <mozmck_work> I did install ccache, and couldn't tell it helped.
[20:55:26] <alex_joni> * alex_joni seconds that
[20:55:35] <cradek> check ccache -s
[20:55:40] <cradek> probably not actually using it
[20:55:40] <mozmck_work> ok
[20:56:18] <mozmck_work> I made gcc, g++ and cpp links to ccache and made sure they are what gets called when you type gcc etc.
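On Debian/Ubuntu the ccache package already ships a symlink directory, which avoids making the links by hand; verifying it is actually in use looks like:
    sudo apt-get install ccache
    export PATH=/usr/lib/ccache:$PATH   # must come before /usr/bin
    which gcc                           # should print /usr/lib/ccache/gcc
    ccache -s                           # "cache hit" should climb on a rebuild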
[20:57:19] <mozmck_work> my dual core machine at home is faster but it still takes 40 minutes to compile a kernel.
[21:02:40] <jepler> is that building all the variations of the kernel, or directly building the custom variant you want (debian/rules custom-binary-rtai in the case of the old hardy packages)?
[21:03:54] <mozmck_work> you mean the time? that's just building using make-kpkg
[21:04:05] <mozmck_work> it may have been a little longer than that
[21:04:32] <alex_joni> debian/rules foo builds all variants and flavours and archs and whatnot
[21:04:34] <jepler> I think that'll only be one kernel variant, then.
[21:04:41] <alex_joni> so that takes a couple hours on my machine
[21:05:01] <alex_joni> a single variant took less than 30 minutes iirc
[21:05:37] <jepler> mozmck_work: do you make sure it's a parallel build e.g., with DEB_BUILD_OPTIONS=parallel=2?
[21:05:46] <jepler> I forget what NOEXTRAS is
[21:07:23] <mozmck_work> hmm, I haven't done the debian/rules stuff yet because I couldn't figure out the configuration.
[21:07:54] <mozmck_work> but with make-kpkg there was an export to use both cpus that I did.
[21:08:04] <jepler> ok
[21:08:20] <mozmck_work> I'll have to look at it again, it's been a little while...
[21:09:38] <jepler> * jepler decides he should time the hardy build on his 4-core system
[21:09:44] <jepler> just in case it gets me any bragging rights
[21:10:54] <alex_joni> heh
[21:10:55] <mozmck_work> export CONCURRENCY_LEVEL=3
[21:11:14] <mozmck_work> jepler: do you have intel or amd?
[21:11:28] <jepler> AMD Phenom 9600
[21:11:38] <jepler> running stock ubuntu amd64 kernel
[21:12:13] <alex_joni> now where's swp hiding when it's about bragging
[21:12:29] <alex_joni> and cores
[21:21:24] <alex_joni> * alex_joni hears the fan going off in jepler's phenom
[22:06:07] <jepler> doh, it failed :( after 20 minutes :(
[22:06:08] <jepler> dh_testroot: You must run this as root (or use fakeroot).
[22:06:08] <jepler> make: *** [custom-install-rtai] Error 1
[22:06:08] <jepler> real 19m32.788s
[22:06:45] <jepler> if I had a dollar for every time a debian package told me that, after doing substantial work, it would have to repeat once I fakerooted...!
[22:07:03] <mozmck_work> no fun
[22:09:27] <jepler> ah, cool -- it didn't repeat a lot of work after that.
[22:09:27] <jepler> real 2m57.965s
[22:14:31] <mozmck_work> not bad. I need to figure out what's the best motherboard for that cpu
[22:15:48] <jepler> I dunno how this system's performance is with rtai; it's a web/mail/development box.
[22:16:25] <mozmck_work> my dual core amd 5800 is great with rtai, and I haven't even tried it with isolcpus yet
[22:17:03] <jepler> the motherboard is TA790GX XE but I can't wholeheartedly recommend it:
[22:17:32] <jepler> I recently built a number of systems with that motherboard, but it was incompatible with the RAM and CPU I chose until I did a BIOS upgrade
[22:17:51] <mozmck_work> I've run it mostly with different rtai kernels for a year now with no stability problems either, doing everything under the sun (nearly)
[22:17:52] <jepler> after upgrade those systems are working OK so far (2 weeks in service)
[22:18:20] <mozmck_work> hmm, I went with a gigabyte board for mine, and it has worked quite well.
[22:23:32] <jepler> I wonder how often people want both velocity and acceleration out of ddt
[22:23:41] <jepler> it's easy enough to make ddt compute both
[22:27:47] <jepler> it would have also given markedly better results back when hal floats were 32 bits wide; it doesn't make much difference now though
[22:30:07] <jepler> (actually I can't find any difference; I expected at least slightly different roundoff between ddt->ddt and direct estimation of acceleration)
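Chaining two ddt instances to get both derivatives would look something like this with halcmd (thread, signal, and position-source names are illustrative):
    halcmd loadrt ddt count=2
    halcmd addf ddt.0 servo-thread
    halcmd addf ddt.1 servo-thread
    halcmd net pos-fb axis.0.motor-pos-fb => ddt.0.in   # position in
    halcmd net vel-est ddt.0.out => ddt.1.in            # velocity out of the first ddt
    halcmd net acc-est ddt.1.out                        # acceleration out of the second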
[22:33:02] <jepler> second run, with the ccache nice and full:
[22:33:02] <jepler> real 16m39.474s
[22:33:11] <jepler> cache hit 6195
[22:33:11] <jepler> cache miss 21
[22:42:13] <mozmck_work> nice. my compile here on a single core 2.4ghz machine took 137m
[22:42:35] <jepler> of course we're comparing different kernels as well
[22:42:45] <jepler> there are bound to be a lot more drivers in your two-years-newer kernel, for instance
[22:42:59] <mozmck_work> yeah, and I'm running make-kpkg which may do differently
[22:43:17] <mozmck_work> ccache -s says cache hit 24
[22:43:28] <mozmck_work> cache miss 18532
[22:43:46] <mozmck_work> cache size 105.1 Mbytes
[22:44:07] <mozmck_work> so it is getting used a little
[22:45:56] <jepler> that is surprisingly low, about .2%
[22:46:21] <jepler> by contrast, my cache was 99.6% effective
[22:46:30] <jepler> .. on the second build
[22:47:07] <mozmck_work> I did a build just a day or two ago. This build I ran make-kpkg clean first because all I had changed was a couple of config settings
[22:47:50] <jepler> I made zero changes
[22:49:15] <jepler> that's obviously the ideal case
[23:09:20] <dgarr> would it be useful to warn the builder when memlock is missing: http://www.panix.com/~dgarrett/stuff/0001-Test-for-memlock-line-in-etc-security-limits.conf.patch
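The sort of line the patch tests for looks like this (the 20 MB figure comes up just below; the value, in kilobytes, is site-specific):
    # /etc/security/limits.conf -- let ordinary users lock 20 MB
    *    -    memlock    20480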
[23:14:19] <cradek> you're only using the variable by name in one out of two places
[23:14:50] <cradek> (I don't know whether it's a good idea - I don't know how standard that file is or on how many various systems this is needed)
[23:15:26] <dgarr> it seems like some people stumble onto the problem and it's hard to figure out the reason
[23:15:44] <cradek> yeah it's a pain
[23:16:45] <dgarr> i fixed the var
[23:16:58] <cradek> what does this even do? what is the 20MB?
[23:17:29] <cradek> hal pins? (seems like a LOT of it)
[23:18:16] <dgarr> different subject: for consideration: http://www.panix.com/~dgarrett/stuff/0001-Add-min-max-windup-limits-for-integrator.patch
[23:18:35] <jepler> cradek: any "realtime" memory region mapped in userspace counts against the user's locked memory limit
[23:18:46] <jepler> so that is the hal shared memory area, the halscope data, and maybe other stuff
[23:18:54] <jepler> (I don't think nml buffers are locked but I could be wrong)
[23:18:57] <cradek> ah
[23:19:48] <jepler> we can do a runtime test: http://pastebin.ca/1760891
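The limit being tested can also be inspected from a shell:
    ulimit -l   # per-process locked-memory limit in KB; needs to be
                # "unlimited" or comfortably above the ~20 MB at issue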
[23:22:34] <mozmck_work> that sounds like a good feature for 2.4
[23:24:26] <CIA-5> EMC: cradek master * r330ba34f6707 /src/hal/components/integ.comp: Add min,max windup limits for integrator
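With that commit the integrator output can be clamped; a usage sketch (parameter names assumed from the commit message, not verified against the source):
    halcmd loadrt integ
    halcmd addf integ.0 servo-thread
    halcmd setp integ.0.min -10   # lower windup limit (name assumed)
    halcmd setp integ.0.max 10    # upper windup limit (name assumed)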
[23:24:53] <jepler> dgarr: that second patch is tested? It looks pretty straightforward
[23:25:02] <cradek> I like jepler's solution to memlock better
[23:25:24] <cradek> jepler: looked fine and I pushed it already (guess I assumed it was tested)
[23:25:29] <jepler> cradek: ah, ok
[23:25:30] <jepler> thanks
[23:25:48] <cradek> I was going to worry about the initial limits being less than the actual full range, but 1e20 is big...
[23:25:58] <cradek> you can't possibly want to integrate up to that
[23:26:50] <jepler> if you're integrating numbers of atoms you might
[23:27:00] <jepler> but yeah, for sane users...
[23:27:51] <dgarr> jepler: tested: yes
[23:33:22] <jepler> http://pastebin.ca/1760908
[23:35:56] <dgarr> man -s 5 limits.conf might be useful for the url ref
[23:37:26] <jepler> huh, I don't have a limits.conf(5)
[23:38:46] <dgarr> dpkg -S says libpam-modules
[23:39:28] <jepler> I have it on an 8.04 system but not a 6.06 system
[23:51:54] <alex_joni> on 6.06 there was no memlock issue
[23:52:15] <alex_joni> they introduced/strengthened it later
[23:52:32] <CIA-5> EMC: jepler master * re11e63f6ced1 /src/rtapi/rtai_ulapi.c: improve message when user might be hitting memory lock limit
[23:52:35] <alex_joni> * alex_joni should head to bed
[23:53:01] <alex_joni> g'night all
[23:53:16] <alex_joni> ah, remembered something
[23:53:28] <jepler> anyone: feel free to give the page http://wiki.linuxcnc.org/cgi-bin/emcinfo.pl?LockedMemory a little tlc .. I just wanted to get something written down there
[23:53:38] <alex_joni> mozmck, jepler: I was planning to investigate using a PPA for building the next kernel packages
[23:54:01] <mozmck> what does the PPA do for you?
[23:54:07] <alex_joni> build the packages
[23:54:30] <alex_joni> they also sign and host them, but we don't care for that
[23:54:35] <jepler> alex_joni: we can talk about it later, but my first inclination is that linuxcnc.org is the established location for emc2 and its dependencies
[23:54:51] <alex_joni> jepler: and I don't plan to change that
[23:54:58] <jepler> if we can get the benefits of launchpad PPAs *and* continue hosting on linuxcnc.org then it's fine
[23:55:09] <alex_joni> just wget the deb's once they are built
[23:55:10] <jepler> anyway, bbl
[23:55:13] <alex_joni> sign and push to DH
[23:55:18] <alex_joni> yeah, me too
[23:55:46] <jepler> dgarr: thanks for the impetus to put in this check
[23:55:53] <alex_joni> mozmck: https://launchpad.net/ubuntu/+ppas