[00:04:03] <jepler> http://article.gmane.org/gmane.linux.kernel/927886
[00:05:39] <micges> nice
[00:08:02] <SWPLinux> cool
[00:08:48] <alex_joni> interesting
[00:10:07] <SWPLinux> hmmm. I wonder how much longer this SVN checkout of gphoto2 will take on the airport wi-fi connection
[00:17:11] <jepler> I wonder if that's not the best time to check out software
[00:17:57] <alex_joni> why wouldn't it be?
[00:17:58] <SWPLinux> heh
[00:18:07] <SWPLinux> it finished though, so I guess it's not all bad
[00:20:48] <SWPLinux> see you later
[00:23:34] <jepler> have a good flight
[00:49:52] <skunworks> SWPadnos: those cases you have - will they take a full sized pci card?
[00:50:53] <skunworks> * full height
[01:01:05] <micges> jepler: can you take a look at this patch: http://www.pastebin.ca/1716497
[01:01:22] <micges> it speeds up loading programs with many dwells
[01:02:35] <micges> from 30 sec down to about 3 sec
[01:05:58] <micges> (to be specific, glcanon.draw_dwells run time drops from 30 sec to about 3 sec)
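Neither patch is reproduced in the log, so the following is only an illustration of the general technique under discussion: trimming per-item Python overhead in a loop that runs once per dwell. The names Canvas, draw_marker, and dwells are hypothetical, not taken from glcanon.py. A minimal, runnable sketch of why hoisting a method lookup out of a hot loop pays off in CPython:

    import timeit

    class Canvas:
        def draw_marker(self, x, y, z):
            return x + y + z  # stand-in for the real per-dwell drawing calls

    dwells = [(i * 0.001, 0.0, 0.0) for i in range(100000)]
    canvas =Anvas = Canvas()

    def per_item():
        for x, y, z in dwells:           # method looked up on every pass
            canvas.draw_marker(x, y, z)

    def hoisted():
        draw = canvas.draw_marker        # bind the method once, outside the loop
        for x, y, z in dwells:
            draw(x, y, z)

    print("per-item:", timeit.timeit(per_item, number=10))
    print("hoisted: ", timeit.timeit(hoisted, number=10))

A change like this typically buys ten or twenty percent; a 30-to-3-second drop like the one micges reports usually requires restructuring the loop itself (for instance, batching the drawing work), but the sketch shows the flavor of micro-optimization being traded here.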
[01:10:02] <jepler> in general I think that kind of change is OK, particularly if the payoff is big. Just curious, by comparison how much does this trim the load time: http://emergent.unpy.net/files/sandbox/glcanon.patch
[01:10:18] <jepler> also, roughly how many dwells in the file that takes 30 seconds to load on unmodified emc?
[01:11:14] <micges> 158000
[01:13:49] <micges> this is a test file, but normally we have many dwells too (laser cutter)
[01:15:41] <jepler> huh. a file with 100000 G1 moves and 100000 G4P1 dwells loads in about 7 seconds for me with the glcanon.patch I just proposed, or 8 seconds without
[01:18:28] <micges> jepler: here I have 10% speed up
[01:19:33] <micges> this is my test file: http://filebin.ca/vxnovd/1.ngc
[01:21:42] <jepler> it's slow to download, so I stopped it. my test program is very simple. http://pastebin.ca/1716519
[01:21:59] <jepler> in fact you could probably change it to an O-word loop and get about the same test
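jepler's actual script is only available via the pastebin link; a sketch of what such a generator might look like (the file name, feed rate, and move pattern below are guesses, not his code) is:

    # write a test file with 100000 short G1 moves and 100000 G4P1 dwells
    with open("dwell-test.ngc", "w") as f:
        f.write("f100\n")                             # set a feed rate so G1 is legal
        for i in range(100000):
            f.write("g1 x%.1f\n" % ((i % 10) * 0.1))  # short linear move
            f.write("g4 p1\n")                        # one-second dwell
        f.write("m2\n")                               # end of program

As jepler notes, a file of the same shape can also be produced inside the interpreter with an O-word while loop rather than generating it from Python.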
[01:23:34] <jepler> what kind of system are you running on? this timing is on 9.10, sim, core 2 duo, forced to 800MHz
[01:24:21] <jepler> intel accelerated opengl, but since you've started by establishing that 90% of the time is taken interpreting Python that shouldn't be relevant.
[01:24:26] <micges> 9.04 sim, Celeron 2.8GHz
[01:25:12] <micges> I'm just trying to avoid bottlenecks in loading gcode
[01:26:06] <micges> next is in gcode.parse()
[01:27:26] <micges> jepler: I'll test the glcanon patch thoroughly at work tomorrow; if all tests pass and the speedup is visible I'll commit it
[01:28:01] <jepler> the proposed change looks OK. It's just odd that your machine is so much slower than mine -- you say 30 seconds for one phase of the load for a file 1.5x as big as mine, I say 8 seconds for the whole load...
[01:28:22] <jepler> now, our test programs aren't the same, so that could be the reason. but it should boil down to the number of dwells being drawn..
[01:30:09] <micges> probably something is messed up here, python programs are REALLY slow
[01:33:35] <jepler> error: patch failed: lib/python/rs274/glcanon.py:66
[01:33:35] <jepler> error: lib/python/rs274/glcanon.py: patch does not apply
[01:33:35] <jepler> error: patch failed: src/emc/usr_intf/axis/extensions/emcmodule.cc:1680
[01:33:35] <jepler> error: src/emc/usr_intf/axis/extensions/emcmodule.cc: patch does not apply
[01:33:41] <jepler> patch doesn't apply here anyway
[01:33:57] <jepler> oh, it's pastebin fucking up whitespace
[01:33:59] <jepler> sigh
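Whitespace damage is a known hazard of passing patches through web paste services. A general git workaround (not something suggested in the log) is to dry-run the patch first, then apply it leniently when context lines differ only in whitespace:

$ git apply --check glcanon.patch              # report problems without applying
$ git apply --ignore-whitespace glcanon.patch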
[01:34:50] <jepler> with your patch loading my file still takes around 7 to 8 seconds
[01:36:00] <micges> jepler: ok, I'll give it a test on a real RT machine tomorrow
[01:41:49] <jepler> what's pystone performance on your system?
[01:41:49] <jepler> $ python -mtest.pystone
[01:41:50] <jepler> Pystone(1.1) time for 50000 passes = 0.68
[01:41:50] <jepler> This machine benchmarks at 73529.4 pystones/second
[01:43:16] <micges> Pystone(1.1) time for 50000 passes = 1.75
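For comparison, 50000 passes in 1.75 seconds works out to roughly 28,600 pystones/second, a bit under 40% of jepler's 73,529.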
[01:52:11] <jepler> that's not at all fast.
[01:53:20] <micges> yep
[01:54:56] <micges> eh 3am, good night all
[03:10:53] <steve_stallings> Does the IBM Real-Time "SMI Free" mode driver linked by jepler have any relevance to trying to run RTAI on IBM Thinkpad laptops?
[03:15:13] <steve_stallings> See - http://article.gmane.org/gmane.linux.kernel/927886
[03:25:57] <cradek> seems like it might be relevant, but I don't know for sure
[03:30:47] <steve_stallings> well, thanks, guess I will keep it in the back of my mind since it will not really matter to me until after Ubuntu 10.04 LTS gets EMC
[03:31:32] <cradek> I am really looking forward to 10.04 because it'll be spring then
[03:32:12] <steve_stallings> never did get the 8.04 EMC release to co-operate with the built-in Ethernet on my T400
[03:33:49] <steve_stallings> have not heard anything lately about CNC Workshop and the prospects of having EMC fest there, anything up?
[03:34:07] <cradek> I think some of us are planning to go
[03:34:29] <cradek> I don't know if we'll do anything particularly special - for instance I don't think there will be any interesting machines to work on
[03:35:24] <steve_stallings> does it look like we will have the time frame and facility access that we are accustomed to having?
[03:36:11] <cradek> sorry what do you mean?
[03:36:30] <steve_stallings> ability to wander in and out at all hours and stay all week
[03:36:48] <cradek> oh I see - I would think so
[03:38:16] <steve_stallings> so... what would be an interesting project machine? lathe:done, rotary axis:done, rigid tap:done, ah.... "spice rack" style tool changer?
[03:39:40] <cradek> not long ago I'd have said a random toolchanger - but that's done
[03:39:54] <steve_stallings> ... or maybe the upgrades to kinematics that are in process
[03:40:24] <cradek> dunno about spice rack. I think it's trivially easy to hack in, but would only work for spice rack
[03:40:34] <cradek> we probably want something more general - but what?
[03:41:46] <steve_stallings> well, I was thinking of a more generalized solution to motion from "macros" to do the spice rack, not a pure hack
[03:42:17] <cradek> yeah
[03:42:43] <cradek> huge project
[03:43:20] <steve_stallings> that is only because you have done such a good job on all the "small" projects
[03:43:20] <cradek> rfl/task redesign is another huge one that we could argue about for a while and then put off for another year
[03:43:48] <cradek> yeah, the low-hanging fruit is all picked, and so is most of what's a bit further up
[03:43:51] <steve_stallings> rfl?
[03:44:25] <cradek> run from line (i.e. redesign all the modal stuff about program starting/stopping)
[03:44:30] <steve_stallings> ah
[03:53:05] <SWPadnos> skunworks, yes, they fit a 5i22-sized card just fine, but there can be some interference between the cables and the hard disk
[03:53:48] <SWPadnos> the (laptop-sized) hard disk(s) mount under the top of the case, and there isn't a lot of room there
[03:59:24] <Dave911_> Dave911_ is now known as Dave911
[04:16:46] <steve_stallings> steve_stallings is now known as steves_logging
[14:54:07] <jepler> I have a work machine (ubuntu 6.06) with simulator. I tried loading the same file with moves and dwells as I did last night. total load time was higher, about 24 seconds. pystone 30120.5/second (50000 passes = 1.66)
[14:56:33] <jepler> micges' patch reduced it to 20 seconds
[14:58:17] <jepler> this machine is a Pentium 4 2.40GHz (family 15 model 2 stepping 7)
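Those numbers are roughly consistent: the Pentium 4 benchmarks at 30,120 pystones/second against the Celeron's ~28,600, and its 24-second load is in the same range as micges' original 30-second figure, which goes some way toward explaining the gap jepler found odd the night before.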
[16:24:29] <christel> [Global Notice] Hi all, sorry for the delay in updating you -- I wanted to get confirmation from sponsors about the actual issue. We are indeed still experiencing a heavy DDoS, and our upstream providers are working to curb it at the borders where possible. Further info will be via wallops. Thank you.
[17:06:54] <christel> [Server Notice] Hi all, the server you are connected to (orwell) will be going down shortly, you may wish to connect to a different server by connecting to chat.freenode.net