
Making Catalyst Debug Logs Really Be Quiet

I have recently been adding and updating tests to my biggest Catalyst project and have been a bit perplexed by the debugging output...in particular that I was seeing any of it! Generally, I like to see all that output scroll by, but when running Test::WWW::Mechanize::Catalyst tests over and over again, it just clutters things and obfuscates any failures.



I had removed -Debug from the plugin list and tried the CATALYST_DEBUG=0 env variable, but I continued to see a lot of the debug messages. After a bit of googling, I finally learned that this was a feature.



The -Debug flag and CATALYST_DEBUG env variable are just for the internal Catalyst debug logs. What I needed to do was to set the log levels with MyApp->log->levels to control what is dumped with $c->log->debug and its brethren. In general, I want my custom debugging and the internal Catalyst debugging to be tied together, so I added the following to lib/MyApp.pm after __PACKAGE__->setup:




__PACKAGE__->log->levels( qw/info warn error fatal/ ) unless __PACKAGE__->debug;
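
In context, the top of lib/MyApp.pm ends up looking something like this (just a sketch; the plugin list and config are placeholders, not my actual app):

package MyApp;
use strict;
use warnings;

# Note: no -Debug in the plugin list
use Catalyst qw/ConfigLoader Static::Simple/;

__PACKAGE__->config( name => 'MyApp' );
__PACKAGE__->setup;

# Drop debug-level messages from $c->log->debug and friends unless
# the app was started in debug mode (-d or CATALYST_DEBUG=1)
__PACKAGE__->log->levels( qw/info warn error fatal/ ) unless __PACKAGE__->debug;

1;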


Now, if I run the server with -d or do something like "CATALYST_DEBUG=1 prove -l t", I see all the usual log messages; otherwise I get nice clean test output.

Moving to a Mac... which Perl?

Well, after neglecting this blog for quite some time, I'm now back. I had to swap my laptop during the summer, and I decided to give one of the MacBook Pros a try. So I'll be adding Perl on the Mac and the Mac in general to the topics covered here. My first dilemma with the new Mac was which perl to use.




  • Leopard only had 5.8 installed, and I've been hooked on 5.10 for a while now. (Snow Leopard has added 5.10, but by the time I got the upgrade I was committed to the ideal of keeping the system perl separate from my development perl.)

  • Having come from Arch Linux, I stumbled upon and really liked Arch OS/X. Unfortunately, it appears that it isn't as well tested as MacPorts. In order to build any Perl modules that use XS with the Arch OS/X perl, I needed to use:


    $ perl Makefile.PL \
    LDDLFLAGS="-arch x86_64 -arch i386 -arch ppc \
    -bundle -undefined dynamic_lookup -L/usr/local/lib" \
    LDFLAGS="-arch x86_64 -arch i386 -arch ppc -L/usr/local/lib" \
    CCFLAGS="-arch x86_64 -arch i386 -arch ppc -g -pipe \
    -fno-common -DPERL_DARWIN -fno-strict-aliasing \
    -I/usr/local/include -I." \
    OPTIMIZE="-Os"


    Ummm... I don't think so! While I created a bash alias for it, cpan/cpanp were requiring constant tweaks. I assume I could have exported those variables from my bashrc, but I would rather avoid global changes like that.

  • Next I tried compiling my own perl. I ended up doing it several times as I learned where to put it, and realized I had forgotten to enable things like threads. This really seems to be the best way to go, but I would rather someone else keep up with security patches, new versions, etc.

  • So finally I tried MacPorts. So far so good. I have had trouble remembering to check the variants (port variants <port-file>), but otherwise thumbs up.



One thing I realized I want is a record of all the ports that I have installed (not a list of all the installed ports, just those that I had purposely installed). So, I wrote a short bash script that I stuck in ~/bin/port to keep a log:



#!/bin/bash

# Log any port command that changes what is installed, then hand
# everything off to the real port binary.
case "$1" in
    install|uninstall|upgrade|activate )
        echo "`date` $@" >> ~/.macports.log
        ;;
    * )
        ;;
esac

/opt/local/bin/port "$@"

Now any time I run port install perl5.10 +shared +threads, it is added to the log file. Rebuilding the system should be a snap. (I'm sure I could have gotten this by grepping for sudo and port install from the /var/log/system.log* files, but I like having it all in one place and not worrying about log files being rotated out.)



One other tweak I needed to make was for CPANPLUS. I wanted to be able to install modules in either the system perl (by running /usr/bin/cpanp) or the MacPorts perl (/opt/local/bin/cpanp), but both of those read my user config file (~/.cpanplus/lib/CPANPLUS/Config/User.pm), which needs a full path for perlwrapper => '/usr/bin/cpanp-run-perl'. So I moved just that part of the config to the system config file by running the following in each cpanp:




$ s save system
$ s edit system


Then I removed everything but the perlwrapper configuration, and finally took the perlwrapper configuration out of my User.pm file. One other thing I needed to do was make 5.10 the default perl. MacPorts defaults to perl5.8, but the following took care of that:




$ cd /opt/local/bin
$ sudo mv perl perl.bak
$ sudo cp perl-5.10 perl
# make cpanp -> cpanp-5.10, etc.
$ for i in *-5.10 ; do x=${i%%-5.10} ; sudo mv $x $x-5.8 ; sudo ln -s $i $x ; done


I see Python has a python_select port-file. Maybe we need something like that for Perl.

Stealing from Padre for Vim part 3

As promised in my last post, I have released a new version of App::EditorTools and have a number of screenshots of the new functionality. This version includes App::EditorTools::Vim, which provides an easy way to add the vim scripts to integrate the package into Vim.




perl -MApp::EditorTools::Vim -e run > ~/.vim/ftplugin/perl/editortools.vim


And now you should have the following mappings:




  • ,pp - Show a menu of the functions available from App::EditorTools

  • ,pL - Lexically rename the variable under the cursor (make sure the cursor is at the start of the variable for now)

  • ,pP - Rename the package based on the path of the current buffer

  • ,pI - Introduce a temporary variable


Here are a few screenshots of these actions:



Lexically Rename Variables:

[screenshot: vim-renamevar]


Rename Package based on the current file's path:

[screenshot: vim-renamepackage]


Introduce Temporary Variable:

[screenshot: vim-introtempvar]



I'd love to hear feedback and any suggestions for future PPI-based tools that Vim, Padre and other editors could leverage.



I also have to note that the editortools-vim script from App::EditorTools was very easy to put together (although it still needs a fair amount of clean-up work and documentation) since it was based on Ricardo's App::Cmd--very easy to use and feature rich.



** Updated 7/5/09 10:37am: moved the screencasts to an external server and fixed some spelling

More theft from Padre

I was pleasantly surprised by the positive response to my last post on Stealing from Padre for Vim--particularly from the Padre developers! Seems they had hoped/planned on separating some of the tools out of the Padre core from the beginning (and many seem to be vimmers).



With their blessing and encouragement, I have pulled the editor-independent parts of their PPI::Task tools into its own distribution--PPIx::EditorTools--available now on CPAN. I also adapted the current version of Padre to work with the external package and released App::EditorTools to provide a command line interface for those editors that need it (i.e., vim). I'll post another screencast and the Vim scripts needed to integrate it shortly.



As a result, I'm deprecating App::LexVarRepl--which was only ever available on github.



Sorry for the short post which is light on links and code, but we are in the process of moving so I'm dedicating the few moments I have to coding this rather than blogging about it...for now at least!

Stealing from Padre for Vim

I'm sure the Padre developers weren't hoping to have their code absconded with by those of us addicted to vim, but tsee's recent blog post on refactoring with Padre's lexical variable replace made me jealous--I want that for vim! So hack, hack, hack and voila:



[screenshot]

This really is just leveraging the Padre code. At this point I actually use Padre::PPI, but that has the downside of requiring Wx (which I personally like, but it is quite a requirement). I only added a bit of code to make this into its own package and included some hints on vim scripting. The idea is to show how this could be abstracted into a standalone module that Padre, vim and any other reasonably powerful editor could use. For now, I have packaged it as App::LexVarReplace with App::LexVarReplace::Vim pod. I would appreciate suggestions on the package layout and name, and any feedback from the Padre guys would be great. The git repository is available for your perusal.

I must say that Padre seems very cool. I continue to check it out every once in a while, but I just can't seem to give up vim, gnu screen and a good old xterm. The developers have really done a great job leveraging modern Perl tools. You should check it out!

I am only publishing this on github for now. I would like to speak with tsee or one of the Padre developers before this makes its debut on CPAN, but I'm a bit short on time at the moment.

Web testing

I have been stuck doing a lot of front-end web work lately and haven't had a chance to do much perl coding (I am planning on releasing a Mason-based renderer for Email::MIME::Kit soon though). I'm very impressed with the power of CSS in modern browsers. The last time I looked at it, browser incompatibilities really made it difficult to use. It is much better now; still, I find most of the work to be trial and error, so I have a couple of tips that have saved me some time going back and forth...



Quick Browser Refresh


I do all of my coding in vim and have a ton of mappings (maybe I'll share some in the future). One I really like right now is ,u, which saves the current file and then runs my xrefresh perl script, which finds my Firefox window and refreshes the current page. It is a simple script based on X11::GUITest. It works particularly well when I am using two monitors and can have the browser open in one of them. Here is the mapping, along with the ,U mapping which saves all files:



nmap <silent> ,u :w<cr>:! xrefresh 'Gran Paradiso'<cr>
nmap <silent> ,U :wa<cr>:! xrefresh 'Gran Paradiso'<cr>

It would probably be easy to extend this script to do things like refresh multiple browsers, refresh without using the cache, etc.
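
For the curious, the guts of such a script only take a few lines of X11::GUITest. This is a rough sketch rather than my actual xrefresh (the window title is passed on the command line, just like in the mappings above):

#!/usr/bin/env perl
use strict;
use warnings;
use X11::GUITest qw(FindWindowLike GetInputFocus SetInputFocus SendKeys);

my $title = shift || 'Mozilla Firefox';   # e.g. 'Gran Paradiso' for Firefox 3 betas

# Find the browser window by title and remember where focus currently is
my ($browser) = FindWindowLike($title)
    or die "No window matching '$title' found\n";
my $previous = GetInputFocus();

# Focus the browser, reload the page, then jump back to the editor
SetInputFocus($browser);
SendKeys('{F5}');
SetInputFocus($previous) if $previous;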



IE Testing with VBox


Next up is some fun with IE. (Oh, the hours wasted on you. Damn you, IE!) Microsoft has actually been very helpful and provided disk images that you can use to test web pages in a virtual machine on different versions of their browser. They expire periodically and are designed for Microsoft Virtual PC, so it takes a bit of work to set them up on Linux using VirtualBox; I created a bash script (create-ie-vbox) to do the heavy lifting.



Once you download the images, run create-ie-vbox <path-to-image.exe> and wait a while. The script uses perl and File::Spec to find the absolute path to the .exe file


exe_file=`echo $exe_file | perl -MFile::Spec -e'print File::Spec->rel2abs(<>)'`

then the unrar command (and perl) to find the .vhd file and unpack it

vhd_file=`unrar l $exe_file | perl -ne'/(\w[\s\w-]*\.vhd)/ && print $1'`
nice unrar e "$exe_file" "$vhd_file" >> $log 2>&1 \
|| die "Couldn't extract $vhd_file from $exe_file"

Then the big trick is to convert the .vhd file to a .vdi file. While VBox can work with .vhd files, Microsoft uses the same UUID for all of these disk images, which VBox doesn't like. There is a command in VBoxManage to change the UUID (VBoxManage internalcommands setvdiuuid "$vhd"), but there is an acknowledged bug that prevents it from being useful here. Instead, we have to use qemu to convert it to a raw disk and then VBoxManage to convert that to a .vdi:

nice qemu-img convert -O raw -f vpc "$vhd" "$raw" || die "Error converting to raw"
nice VBoxManage convertdd "$raw" "$vdi" || die "Error converting to vdi"

Finally there are a number of VBoxManage commands to create and register the image and disk.
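
The exact invocations depend on your VirtualBox version, but with a recent VBoxManage the registration step boils down to something along these lines (a sketch, not what create-ie-vbox literally runs; the VM name, OS type, memory size and controller name are placeholders):

vm="IE6-XP"
VBoxManage createvm --name "$vm" --ostype WindowsXP --register \
    || die "Couldn't create VM $vm"
VBoxManage modifyvm "$vm" --memory 512 --vram 16
VBoxManage storagectl "$vm" --name "IDE Controller" --add ide
VBoxManage storageattach "$vm" --storagectl "IDE Controller" \
    --port 0 --device 0 --type hdd --medium "$vdi" \
    || die "Couldn't attach $vdi"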



Once this is done, there is a bit more work to do within the VM to deal with some bugs and annoyances. The script prints out the remaining steps. Many thanks to George Ornbo, who pointed the way on his blog. Note that this script only works on the XP disks at this point; I haven't bothered to test the Vista versions yet.



BTW, most of my bash scripts use the die() function (stolen from perl) which is very simple but handy:


function die() {
    echo "$@"
    exit 1
}



Browser Resize


Lastly, I like to check my pages in browsers at various widths. Most importantly (or is that annoyingly), 800px. Rather than change my screen resolution, I use this simple bash script to resize Firefox to the desired width (defaulting to 800 if nothing is supplied on the command line):


#!/bin/bash

SIZE=${1:-800}
echo Resize to $SIZE width
wmctrl -r "Firefox" -e 0,-1,-1,$SIZE,-1

Simple, but gets the job done quickly!



Hope these are helpful.

Get Lazy, Use Data::Pageset::Render

I have been using the very nice Data::Pageset module for a while now. It makes separating your data into multiple pages very simple, and for very large datasets it has a slide mode which helps keep your pager small (rather than having links to 100 pages, you get links to the first and last page and the five pages around your current page).



Eventually, I got tired of recreating the same html code for each new pager. I first put together a simple Mason component, but found myself copying it between projects and starting to write one for TT. Eventually, I wrote Data::Pageset::Render. The module (which is on CPAN) subclasses Data::Pageset and adds the html method, which returns the html code, complete with links, to create your pager.



Just create your pager object as you would with Data::Pageset, adding link_format => '<a href="q?page=%p">%a</a>' to the constructor; the html method then eliminates all that redundant paging html.
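
For reference, building the pager looks something like this (a sketch with made-up numbers; everything except link_format is a standard Data::Pageset constructor argument):

use Data::Pageset::Render;

my $pager = Data::Pageset::Render->new( {
    total_entries    => 950,   # size of the full result set
    entries_per_page => 10,
    current_page     => 5,
    pages_per_set    => 5,     # the sliding window of page links
    mode             => 'slide',
    link_format      => '<a href="q?page=%p">%a</a>',
} );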


my $pager_html = $pager->html();
# $pager_html is html "<< 1 ... 3 4 5 6 7 ... 10 >>" with appropriate links

# A bit more control over the appearance of the current page:
my $pager_html = $pager->html( '<a href="q?page=%p">%a</a>', '[%a]' );
# $pager_html is html "<< 1 ... 3 4 [5] 6 7 ... 10 >>" with appropriate links



This works great within a larger framework like TT or Mason:


# In a TT template all the paging html is reduced to just:
[% pager.html() %]

# or in a Mason template:
<% $pager->html() %>



There aren't very many configuration options now. If there is interest, I might make some of the controls customizable (e.g., the >> used to move forward). Any suggestions are more than welcome.

Easy Access to Your Minicpan Repository

I am a big fan of CPANPLUS and minicpan. I like the plugin structure and power of CPANPLUS. ([Warning: shameless plug follows] I have written a simple plugin that allows you to see/install the prereqs for a module with commands like cpanp /prereqs show or cpanp /prereqs install.) And minicpan is great for getting work done on an airplane or when I am away from the net.



One thing I have struggled with in the past is getting cpanp to use my local minicpan mirror, or any mirror other than my default. Editing the config file is not that hard, but it is far too permanent for what I am trying to do. So I wrote two simple scripts (basically tweaked versions of /usr/bin/cpanp) that change the mirror to my local minicpan or to a mirror passed on the command line: cpanp-local and cpanp-mirror. Both could be significantly improved and documented, and they should probably be combined, but they get the job done.
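
For anyone curious about the mechanics, the core of the trick is simply to override the hosts setting before CPANPLUS does anything else. A rough sketch of the idea (this is not what cpanp-local actually contains, and the minicpan path is a placeholder):

#!/usr/bin/perl
use strict;
use warnings;
use CPANPLUS::Backend;

my $cb = CPANPLUS::Backend->new;

# Point this session (and only this session) at a local minicpan mirror
$cb->configure_object->set_conf(
    hosts => [ { scheme => 'file', host => '', path => "$ENV{HOME}/minicpan/" } ],
);

# Install whatever was asked for on the command line
for my $name (@ARGV) {
    my $mod = $cb->module_tree($name)
        or warn "Can't find $name\n" and next;
    $mod->install or warn "Install of $name failed\n";
}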

Facial Detection and Recognition in Perl

I just recently came across the new facial detection features in Picasa, and I have to say I am very impressed. This is a very useful feature and well implemented; once again Google has set the bar. Unfortunately, it is only available in the web version, and I have way too many photos to upload. Further, there seem to be more privacy concerns with a web version of a tool like this.



So, I was very excited when I saw a brief mention of a presentation about OpenCV from a Ruhr.pm meeting (thanks to Yanick's blog). Maybe this was the tool to implement facial detection/recognition in my photos locally.



Without going into much detail, OpenCV is a library of routines for facial detection, recognition, and similar computer vision tasks. So far, I have only experimented with the facial detection part and read a bit about the facial recognition routines. There is a handy Perl binding (Image::ObjectDetect) for the facial detection part. With it and a reference to the slides from Ruhr.pm, I was able to hack together a very simple script to look for and highlight faces in my photo collection.
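
For anyone who wants to play along, here is a rough sketch of that kind of script (not my actual code): detect faces with Image::ObjectDetect and draw a box around each hit with Imager. The cascade path is an assumption--it depends on where OpenCV put its haarcascade files on your system.

#!/usr/bin/perl
use strict;
use warnings;
use Image::ObjectDetect;
use Imager;

my $cascade = '/usr/share/opencv/haarcascades/haarcascade_frontalface_alt2.xml';
my $file    = shift or die "Usage: $0 photo.jpg\n";

# detect() returns a list of hashrefs with x, y, width and height keys
my $detector = Image::ObjectDetect->new($cascade);
my @faces    = $detector->detect($file);

my $img = Imager->new;
$img->read( file => $file ) or die $img->errstr;

# Outline each detected face in red
for my $face (@faces) {
    $img->box(
        xmin   => $face->{x},
        ymin   => $face->{y},
        xmax   => $face->{x} + $face->{width},
        ymax   => $face->{y} + $face->{height},
        color  => 'red',
        filled => 0,
    );
}

$img->write( file => "faces-$file" ) or die $img->errstr;
printf "Found %d possible face(s) in %s\n", scalar @faces, $file;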




While there were many false positives and some missed faces, OpenCV and the script show promise. Most of the missed faces were either very small or profiles, and there is another "cascade" (essentially a set of configuration presets to the detection algorithm) that might yield better results for profiles.



Reading about facial recognition on the OpenCV site was a bit underwhelming, though. Apparently OpenCV only supports one method of recognition, Principal Component Analysis, which sounds like it has some severe limitations. From the site:



However it does have its weaknesses. PCA is translation variant - Even if the images are shifted it wont recognize the face. It is Scale variant - Even if the images are scaled it will be difficult to recognize. PCA is background variant- If you want to recognize face in an image with different background, it will be difficult to recognize. Above all, it is lighting variant- if the light intensity changes, the face wont be recognized that accurate.


Not exactly a great marketing pitch! PCA doesn't sound like the most suitable method for recognizing faces from arbitrary photos. I haven't played with it yet, but that's on the todo list.



To sum it all up, OpenCV looks like a great tool. I'll update this blog as I continue to experiment with it and my homebrew version of PicasaWeb's facial detection/recognition features. If anyone else has had experience with OpenCV or other facial detection/recognition tools (particularly those with Perl interfaces) I would love to hear about it.

C-Like Pointers In Perl...Oh No!

Tuesday night David Lowe gave a very interesting talk at SF.pm on pack/unpack and some of the awful things you can do with them.[1] We ended the meeting talking about whether you could use the pack format "P" (which packs and unpacks "a pointer to a structure (fixed-length string)") to force poor Perl to do C-like pointer arithmetic.



David is using unpack to do a binary search of fixed width blobs of data in order to avoid unserializing it. His current (minor) bottleneck is creating the pack format string dynamically for each step in the binary search (ie, 'x' . ($record_size * $record + 1)). The math is fast, the string concatenation is relatively slow. I wondered if you could use the "P" format to avoid creating the format string on each pass and stick with simple integer arithmetic.



After a bit of hacking, it turns out this can be done. Instead of David's very complicated:




# Create an unpack format to skip the first $record * $record_size
# bytes, then return the next 100 byte null padded string
my $format = 'x' . ( $record_size * $record ) . 'Z100';
# Unpack from our binary blob
my $element = unpack( $format, ${$frozen_haystack_ref} );


You get the nearly unfathomable:



 
# Use pointer arithmetic to calculate where the record is in memory
# and convert the Perl integer into an unsigned long integer
my $ptr = pack( 'L!', $ptr_to_base + $record_size * $record );
# Pull 100 bytes from that spot in memory
my $element = unpack( 'P100', $ptr );


And voila, Perl is doing pointer arithmetic and accessing structures just like C. Unfortunately, unpack("P") won't take a native Perl integer as an argument. You need to use pack("L!") to turn a Perl integer into a long integer. So we trade the string concatenation in David's code for a pack("L!") in this code. And even worse, string concatenation is about 20% faster than unpack.
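
For the curious, the comparison can be benchmarked with a sketch along these lines (not David's code or my actual benchmark; the record size and count are made up, and 'L!' assumes a perl whose native unsigned long is as wide as a pointer, which is true on the usual 32-bit and LP64 64-bit builds):

use strict;
use warnings;
use Benchmark qw(cmpthese);

my $record_size = 100;
my $num_records = 10_000;

# Build a blob of fixed-width, null-padded records
my $frozen_haystack = join '', map { pack "Z$record_size", "record $_" } 0 .. $num_records - 1;
my $frozen_haystack_ref = \$frozen_haystack;

# Numeric address of the start of the blob
my $ptr_to_base = unpack 'L!', pack 'P', $frozen_haystack;

cmpthese( -2, {
    concat  => sub {
        my $record  = int rand $num_records;
        my $format  = 'x' . ( $record_size * $record ) . 'Z100';
        my $element = unpack $format, ${$frozen_haystack_ref};
    },
    pointer => sub {
        my $record  = int rand $num_records;
        my $ptr     = pack 'L!', $ptr_to_base + $record_size * $record;
        my $element = unpack 'P100', $ptr;
    },
} );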



So, while this doesn't appear to help David speed up his already cheetah-like code, it does prove that you can have pointers in Perl. Of course, you should never ever do anything like this. It is fraught with potential bugs and will drive anyone stuck maintaining your code insane.



Feel free to take apart my ugly benchmarking code. Maybe someone who knows this better can actually save David a few clock-cycles.



--

By the way, thanks to Matt Trout, who got me motivated to (re)start blogging about Perl. In the past, I have gotten bogged down by setting up a site rather than focusing on adding content.[2] This time I decided to let Google do the work for me and focus on the content. Hopefully, this will result in more regular (and interesting?) posts. Feedback is very welcome.



Footnotes:

[1] David actually has good reasons to do these horrible things, given some of the performance demands of his code; for the rest of us this is just fun^H^H^Hwrong.
[2] Either putting together my own TT-based blog/site or trying to get MT to work the way I want.