Ignore this unless you’re using a Rack handler and ActiveRecord.
If you’re using a vanilla Rack handler, or Grape, or JSONRPC2, or something similar that accesses your database via ActiveRecord, and you’re mounting it directly in Rack,
you’ll probably benefit from using ActiveRecord::QueryCache. Unless you’re going through the Rails stack, you don’t get this for free; you have to ask for it.
```ruby
use ActiveRecord::QueryCache
```
It’s just a standard piece of Rack middleware, but it turns on DB caching for the duration of the request.
e.g.

```ruby
map '/foo/bar' do
  use Rack::Logger
  use ActiveRecord::QueryCache # <-- this increased the speed of my API calls by ~20%
  run MyRackHandler
end
```
I was pondering the question: what code runs when method-level rescue, else and ensure are used in Ruby?
TL;DR summary
```ruby
def some_method
  # main body
rescue
  # rescue code
else
  # alternative to rescue
ensure
  # always run me last
end
```
Without return, the last value computed outside the ensure block is returned (this will be the value of the main body, the rescue block or the else block).
Using return in the main body of the method means the else block doesn’t run.
Using return in an ensure block always overrides any other value returned by the method, regardless of whether any other section of the method also used the return keyword.
Values from an ensure block are only ever returned when the return keyword is used.
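The rules above can be checked with a small script (the method names below are mine, chosen purely for illustration):

```ruby
# If nothing raises, the else value becomes the method's value;
# ensure still runs last, but its value is discarded.
def quiet
  :body
rescue
  :rescue
else
  :else
ensure
  :ensure
end

# If the body raises, the rescue value is returned and else never runs.
def noisy
  raise "boom"
rescue
  :rescue
else
  :else
end

# An explicit return in the main body skips the else block entirely.
def early
  return :body
rescue
  :rescue
else
  :else
end

# An explicit return in ensure overrides any other return value.
def override
  return :body
ensure
  return :ensure
end

p quiet    # => :else
p noisy    # => :rescue
p early    # => :body
p override # => :ensure
```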
Until issue 704 is resolved, Passenger Standalone won’t compile properly on Ubuntu 11.10 (Oneiric Ocelot - currently pre-release) using the default settings.
To work around this, use GCC 4.4 instead. You’ll need to install gcc-4.4 and libstdc++6-4.4-dev and then specify GCC 4.4 at compile time using the CC environment variable.
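For the record, the workaround looks something like this (package names as of Ubuntu 11.10; Passenger Standalone compiles its support binaries on first start):

```shell
# Install GCC 4.4 and the matching C++ standard library headers
sudo apt-get install gcc-4.4 libstdc++6-4.4-dev

# Point the build at GCC 4.4 via the CC environment variable
CC=gcc-4.4 passenger start
```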
The standard answer is that zip files can’t contain two entries for the same file without containing two copies of the data. In other words, there’s no portable equivalent of a *nix-style hard link.
And that’s kind of true. However, it is theoretically possible to create valid zip files that violate this principle in a platform-independent manner. Unfortunately this doesn’t work properly with Stuffit :(
The data for a file entry must start immediately after its local file header, but the header can be up to ~65 KB long and ends with extra fields that should be ignored if they are not understood. So we can stuff a second local file header inside the end of a parent local file header (prefixed by 32 bytes of “unknown” extra field), giving us two valid local file headers that each end immediately before the only copy of the file data, as pictured:
And then we add the entries to Central Directory as if they were normal file entries.
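The layout can be sketched in plain Ruby. This is an illustration of the scheme, not the exact bytes any real tool emits: it builds a tiny archive with one copy of the data, an embedded second local header wrapped in a single “unknown” extra-field record (0x6666 is an arbitrary unassigned ID I picked), and two central directory entries.

```ruby
require 'zlib'

data = "the one and only copy of the file data"
crc  = Zlib.crc32(data)
size = data.bytesize

# Local file header for a "stored" (uncompressed) entry.
def local_header(name, crc, size, extra = "")
  ["PK\x03\x04", 20, 0, 0, 0, 0, crc, size, size,
   name.bytesize, extra.bytesize].pack("a4v5V3v2") + name + extra
end

# Central directory entry pointing at a local header offset.
def central_entry(name, crc, size, offset)
  ["PK\x01\x02", 20, 20, 0, 0, 0, 0, crc, size, size,
   name.bytesize, 0, 0, 0, 0, 0, offset].pack("a4v6V3v5V2") + name
end

# The child header sits at the tail of the parent's extra field,
# wrapped in one extra-field record with an unassigned ID so that
# parsers walking the parent's extra field skip straight over it.
child  = local_header("copy2.txt", crc, size)
extra  = [0x6666, child.bytesize].pack("vv") + child
parent = local_header("copy1.txt", crc, size, extra)

# Both headers end exactly where the single copy of the data begins.
child_offset = parent.bytesize - child.bytesize

zip = parent + data
cd  = central_entry("copy1.txt", crc, size, 0) +
      central_entry("copy2.txt", crc, size, child_offset)
eocd = ["PK\x05\x06", 0, 0, 2, 2, cd.bytesize, zip.bytesize, 0].pack("a4v4V2v")

File.binwrite("twice.zip", zip + cd + eocd)
```

Listing the archive should show both names backed by the single copy of the data, though (as noted below) not every tool honours the embedded header.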
Tests work fine with Info-ZIP, 7-Zip and the built-in Windows zip support. Unfortunately Stuffit on OS X only appears to recognise the “normal” entries (i.e. it doesn’t extract the embedded headers).
I wanted to stream zip files containing lots of JPEGs: hundreds of JPEGs from digital cameras. And since they were JPEGs, I didn’t really care about trying to compress them any further.
I wanted to create (potentially) huge archives. So I’d need something that supported ZIP64 extensions.
I wanted to mix local files and files streamed from internal web servers.
I wanted to create a zip file on the fly, with minimal buffering, to minimize disk and memory requirements.
I wanted to support large numbers of simultaneous downloads.
I also wanted (if possible) to efficiently include the same file more than once in an archive with different filenames.
I wanted to continue to use zip archives.
Simples?
If only.
Streaming ZIP64 support (or lack thereof)
There are several Ruby zip libraries, e.g. rubyzip, zip-ruby and archive-zip, but they seem to fall into two camps: pure Ruby with no ZIP64 support, or wrappers around a C library (e.g. libzip) with no obvious way to create a zip file and start streaming it before it’s complete.
So I indulged my NIH reflex and wrote zip64writer, which streams zip files and automatically starts using ZIP64 extensions when needed. (I did look at adding ZIP64 support to rubyzip, but I quickly figured that it would be easier to roll a specifically targeted library than to adapt it to my needs.)
So writing a zip file to a stream works something like:
```ruby
require 'zip64/writer'

File.open("output.zip", "wb") do |fp|
  Zip64::ZipWriter.new(fp) do |zip|
    File.open("sample.jpg", "rb") do |rfp|
      zip.add_entry(rfp, :mtime => Time.now, :name => 'myphoto.jpg')
    end
  end # Implicit close writes central directory to stream
end
```
ZIP64 extensions are extra header fields, plus an extra couple of blocks at the end of the zip file, which allow zip files to contain more than 65,535 entries (the limit of a 16-bit integer) and allow the archives (and the files inside them) to be larger than 4 GB (the limit of a 32-bit integer).
The writer detects when an offset requires a 64-bit integer (i.e. offset > 4 GB) and automatically starts using ZIP64 extensions, so the files remain as compatible as possible with old zip implementations that don’t support ZIP64 (e.g. the Windows XP shell).
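The detection rule itself is just a pair of threshold checks. A minimal sketch (constant and method names here are mine, not zip64writer’s API):

```ruby
# ZIP64 is needed once any offset or size exceeds an unsigned 32-bit
# integer, or the entry count exceeds an unsigned 16-bit integer.
MAX_32BIT = 0xFFFFFFFF # 4 GB - 1
MAX_16BIT = 0xFFFF     # 65,535

def needs_zip64?(offset:, entries:)
  offset > MAX_32BIT || entries > MAX_16BIT
end

p needs_zip64?(offset: 5 * 1024**3, entries: 10) # 5 GB archive    => true
p needs_zip64?(offset: 1024, entries: 70_000)    # too many entries => true
p needs_zip64?(offset: 1024, entries: 10)        # small archive    => false
```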
Basic testing reveals that ZIP64 files created this way (i.e. a mix of standard encoding and ZIP64 encoding) work fine on Windows 7 and OS X 10.5+. (The version of file-roller shipped with Lucid Lynx also opens them fine, although the version of zip shipped with Hardy Heron is too old.)