GSoC 2013 "Cloud Ready" Project Specification

Robin Mills, 03 Mar 2013

There are four subprojects:

  1. HTTP I/O support (GSoC 2013 Student)
  2. exiv2(.exe) running as a service (daemon) on a web socket
  3. Client-side use of the exiv2 service (using the web socket)
  4. JSON support

This is quite a large project. Robin Mills intends to implement the daemon/web-socket support during Spring 2013. The GSoC student is expected to implement the HTTP I/O support. The proposal to have a GSoC student join us will be made via KDE: http://community.kde.org/GSoC/2013/Ideas#Exiv2_.22Cloud_Ready.22_Project

1 HTTP I/O support (GSoC 2013 Student)

Today we support files available on the file system. These files can be memory-mapped if the host OS supports that feature.

With the increasing interest in "cloud" computing, it's become ever more common for files to reside in remote locations which are not mapped to the file system. Very common cases today are ftp and http. For example: http://bla/bla/bla/file.jpg. Today there are a myriad of "Cloud" storage products, such as AWS, DropBox, Google Drive, Sky Drive, Box, iCloud, Just Cloud and more.

The proposal is to support http, ftp and ssh. This can be done by deriving a new class from the BasicIO abstract class. The exiv2 command would accept filenames given as URLs. For example:

exiv2 -pt http://clanmills.com/files/Robin.jpg
exiv2 -pt ftp://username:password@clanmills.com/Robin.jpg
exiv2 -pt ssh://username:password@clanmills.com/Robin.jpg
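
To make this concrete, here is a minimal sketch of the scheme detection the command would need before choosing an IO class. The enum and function names are illustrative, not part of the Exiv2 API:

#include <string>

// Illustrative sketch only: not part of Exiv2.
enum Protocol { pFile, pHttp, pFtp, pSsh };

// Pick a protocol from the leading scheme of the path; anything without
// a recognised scheme is treated as an ordinary local file.
Protocol protocolOf(const std::string& path)
{
    if (path.compare(0, 7, "http://") == 0) return pHttp;
    if (path.compare(0, 6, "ftp://")  == 0) return pFtp;
    if (path.compare(0, 6, "ssh://")  == 0) return pSsh;
    return pFile;
}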

In most image files, the meta-data is defined in the first 100k of the file, so the implementation should only read blocks on demand from the server and avoid copying the complete file.

The simplest possible implementation of this proposal is for exiv2 to detect the protocol and use a helper application such as curl or ssh. This implementation probably requires copying the complete file from the remote storage to a temporary file in the local file system. While such an implementation can be constructed quickly, it does not satisfy the project aim of making efficient use of bandwidth.

It is very desirable to use a robust implementation of the web protocols, and a library such as libcurl should be considered. The selection of the protocol support library must respect build implications: we should be careful to avoid adding a large library (such as boost) to the build dependencies. Additionally, the implementation is required to be written in C++ and run on Mac/Windows/Linux without dependency on platform frameworks such as .Net, Java, or Cocoa. It may be that build switches can be provided to enable Exiv2 to use platform frameworks. This could be especially useful on mobile platforms such as Android and iOS.

The implementation should provide bi-directional support (both read and write) with read-access being the first priority.

2 and 3 Exiv2 daemon server and client

The aim is to enable exiv2 to run as a service (daemon) on a web socket. I imagine two types of client:

  1. exiv2 itself of course
  2. JavaScript/WebSocket client

To do this we could do something like this:

Server:      # exiv2 --daemon --port 54321
Client:      $ exiv2 -pt exv://server:54321:/Robin.jpg
Even better: $ exiv2 -pt exv://server:54321:/http://clanmills.com/files/Robin.jpg
I don't want to get into detail concerning the JavaScript API for this. It would be something like this:
<script src="js/Exiv2.js"></script>
<script>
var exiv2     = new Exiv2( { server : 'clanmills.com' , port : 54321 });
var metadata  = JSON.parse(exiv2.command('--JSON -pt /Robin.jpg'));
// or even better
var metadata  = JSON.parse(exiv2.command('--JSON -pt http://clanmills.com/files/Robin.jpg'));
</script>
To get the most from this functionality, we should provide JSON (and/or XML) support which I discuss below.

4 JSON Support

5 years ago, I became interested in exiv2 to implement a GeoTagging application. I decided to use Python as an excuse to learn the language. I used the pyexiv2 wrapper, written by Olivier, and the project was a success. Building exiv2 and pyexiv2 on Windows and MacOSX was a challenge (to say the least).

Since then, I've worked steadily on the exiv2 msvc and msvc64 build environments and I believe both are working very well.

Sadly, building pyexiv2 remains a challenge because it requires boost and the scons build utility. (scons is/was another GSoC project.) The consequence is that my Python script seldom uses the latest exiv2 and is not available on all my machines (Windows/Cygwin/Mac/Kubuntu). The script is stable (hardly changed in 5 years), however building the pyexiv2 wrapper is a maintenance challenge. pyexiv2 has to be built for specific versions of Python (2.6, 2.7, etc.), architectures (32/64-bit) and platforms (Windows/Cygwin/MacOSX/Linux).

This is not a criticism of Olivier's pyexiv2 wrapper. Olivier has done a very good job. Python wrappers which link C++ are a severe maintenance challenge. I haven't worked for years with Perl's C++ support (XS and/or SWIG), however I anticipate similar pain and trouble.

JSON to the rescue. My proposal is to provide a JSON interface to read and write meta-data in the exiv2 command-line utility.

As a sample application to prove our JSON support, we will provide wrappers for Perl and Python. The wrappers can be written entirely in the scripting language and use the language's JSON support. There is no need to get involved with C++ integration challenges such as boost/scons/pyexiv2, XS and SWIG. When reading from files, the wrapper will call exiv2.exe ONCE to capture all the JSON to a file. When writing to files, the wrapper will call exiv2.exe ONCE. This strategy will enable the wrappers to run on all platforms on which exiv2.exe is available.

Expected results:

  1. To deploy a webservice to provide Exiv2 services.
  2. To provide a JavaScript library to enable developers to use the Exiv2 service.
  3. An engineering assessment of the effort involved in providing access to cloud servers such as AWS.

GSoC Mentor:

Robin Mills http://clanmills.com/files/CV.pdf

I've been a volunteer on the Exiv2 project for 5 years. I worked for Adobe for 10 years, where I implemented reading PDF and JDF files over http (without copying the complete file). I'm now a freelance contractor and I've been working on a mobile app which uses WebSockets. I've worked on both server and client code.

Project Notes:

If you wish to submit a proposal, or discuss this project with me, then please do the following:

  • Confirm with me that you have good C++ skills
  • Download and build Exiv2.

When you read the code, here are some suggestions for matters you may wish to consider:

1) Exiv2 BasicIO abstract class and HttpIO concrete class (for reading)

  • I don't remember the API, however it has methods to read from a stream (open/close/tell/seek/read/write)
  • You should derive a new class from BasicIO; it could be called HttpIO or something like that
  • HttpIO should allocate memory for the complete file and maintain a map of blocks which have been copied from the server
  • When a read is requested, HttpIO should ensure the appropriate blocks have been fetched, update the map, and return the data
  • The copy from the server should use HTTP's "byte range" feature to limit the number of bytes to be copied
Why do we need to allocate memory for the complete file? Don't we just need enough space for the metadata? Not quite, for several reasons:
  1. Some elderly HTTP servers don't support "byte range", so you have to copy the whole file.
  2. I’m hoping we can use the Memory Mapping IO code. So you populate the memory with data “Just in time”.
  3. Some file formats (eg PDF) are random access and the meta-data can be anywhere in the file. Most JPGs have the meta-data in the first 100k, however we want our code to handle other possibilities.
  4. The map tells us which blocks to transmit to the server when we’ve modified the file.
  5. The map is very simple – an array of bools. I suggest a block size of 8*1024 – however make sure that's a const that we can tune. You might also want to always prepopulate the first 100k on open. So, when you get the "open" call, you do a 100k GET from the server and you're in business. Good, eh? There's a sketch of this below.
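
Here is a minimal sketch of that block map. The helpers httpContentLength() (a HEAD request) and httpGetRange() (a ranged GET, e.g. implemented with libcurl) are assumptions, declared but not implemented here, and the real class must of course derive from BasicIO and honour its actual signatures:

#include <algorithm>
#include <cstddef>
#include <cstring>
#include <string>
#include <vector>

// Assumed transport helpers (e.g. written with libcurl) -- these are NOT
// real Exiv2 or libcurl calls.
size_t httpContentLength(const std::string& url);                    // HEAD
size_t httpGetRange(const std::string& url, size_t from, size_t to, char* out);

class HttpIo {                     // sketch only: should derive from BasicIO
public:
    explicit HttpIo(const std::string& url, size_t blockSize = 8 * 1024)
    : url_(url), blockSize_(blockSize), pos_(0)
    {
        size_ = httpContentLength(url_);      // file length from HEAD request
        data_.resize(size_);                  // space for the complete file
        present_.assign((size_ + blockSize_ - 1) / blockSize_, false);
        populate(0, std::min<size_t>(size_, 100 * 1024)); // prime first 100k
    }

    size_t read(char* buf, size_t count)
    {
        if (pos_ >= size_) return 0;          // at or past end of file
        count = std::min(count, size_ - pos_);
        populate(pos_, count);                // fetch any missing blocks
        std::memcpy(buf, &data_[pos_], count);
        pos_ += count;
        return count;
    }

    void   seek(size_t offset) { pos_ = offset; }
    size_t tell() const        { return pos_; }

private:
    // Ensure every block overlapping [offset, offset+count) has been copied.
    void populate(size_t offset, size_t count)
    {
        if (count == 0) return;
        for (size_t b = offset / blockSize_; b <= (offset + count - 1) / blockSize_; ++b) {
            if (present_[b]) continue;        // already cached: nothing to do
            size_t from = b * blockSize_;
            size_t to   = std::min(from + blockSize_, size_) - 1;
            httpGetRange(url_, from, to, &data_[from]);  // HTTP "byte range"
            present_[b] = true;
        }
    }

    std::string       url_;
    size_t            blockSize_;   // the tunable block size (point 5)
    size_t            pos_;
    size_t            size_;
    std::vector<char> data_;
    std::vector<bool> present_;     // the "array of bools" block map
};

For writing, the same map can record which blocks have been modified and therefore need to be transmitted back to the server (point 4).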

2) HttpIO and writing

  • I’ve never wanted to write "byte-ranges" over http. We need to research this.
  • However, the map should record which parts of the file have changed so that only those blocks are sent to the server.

3) Protocol support library

  • I respect libcurl. I believe it supports http/https/ftp/sftp
  • I'm sure it can do "byte-ranges" for HTTP GETs (see the sketch below).
  • I don’t know if it can do byte-ranges on other protocols
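
As a quick illustration, a ranged GET with stock libcurl looks something like this (CURLOPT_RANGE is the relevant option; error handling omitted for brevity):

#include <curl/curl.h>
#include <iostream>
#include <string>

// Append each chunk libcurl delivers to a std::string.
static size_t onData(char* ptr, size_t size, size_t nmemb, void* userdata)
{
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    std::string body;
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://clanmills.com/files/CV.pdf");
    curl_easy_setopt(curl, CURLOPT_RANGE, "23-60");       // request only bytes 23-60
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, onData);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    std::cout << "received " << body.size() << " bytes\n";
    return 0;
}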

4) Other protocols

  • Other protocols (smb, nfs, ssh) may be possible. This needs to be investigated.
  • Cloud protocols (AWS, DropBox, etc.) also need to be investigated.

5) User Interface, test harness and Platforms

Exiv2 is a library. There is no user interface. Exiv2 includes about 20 sample applications which are all command-line programs. The main application is exiv2(.exe) which does many things and is used by the test suite. The test suite is written in bash. On Windows, the test suite is run from Cygwin - however it can test libraries built with Visual Studio as well as GCC and Clang.

All code is required to build, execute and test correctly on the major platforms: Windows/MacOSX/Linux. I will provide help to port from the development system to the others.

6) Some thoughts about implementation

There is a Memory Mapped IO class in Exiv2. I think we can use that to implement the HTTP read support. We can allocate memory for the complete file, then populate the memory "just in time" when the user makes a read request.

The priority is to have HTTP/read support (without copying the whole file).
Writing back isn’t so interesting (most HTTP servers don’t allow PUT).
Other protocols are interesting, and the "quick and dirty" solution is to copy the complete file.

This business of only updating those parts of the file which have changed is very effectively implemented by rsync. Perhaps we should investigate whether we can incorporate that in our solution. I personally update clanmills.com (which has 6GB/100,000+ files) using rsync (over ssh) and I am always astonished by its speed/reliability.

Notes about prototyping a solution

1) First of all, for all folks interested in contributing to this project, I recommend that you register with the Exiv2 forum AND add a watch to this page. It's my intention to update this page quite frequently. I'll make the same information available to everybody who would like to be involved.

2) If you don't know anything about HTTP's GET verb, HTTP Header and Body - now's the time to learn. Google something up, visit the library, or ask around. Everybody involved in web programming knows about this. So, I'm not going to discuss it here.

3) When you inspect the Exiv2 code base, you'll discover that the concrete classes for opening and reading files are derived from the abstract BasicIO class. I recommend that you "instrument" those functions with printf (or cout) statements to report when they are called and their arguments. Run the command exiv2 -pt foo.jpg and you'll see the IO calls being made.
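
The tracing might look something like this; the signature below is invented for the illustration, so check the real ones in basicio.hpp:

#include <cstdio>

// Stand-in for a BasicIO-style read method, instrumented with printf.
// In practice you would add the printf line to the real method body.
long read(unsigned char* buf, long rcount, long pos)
{
    std::printf("read rcount=%ld pos=%ld\n", rcount, pos);
    // ... existing read logic ...
    return rcount;
}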

A word of caution: I've never done this for Exiv2. However I know you'll do this and share your result and experience.

4) Download and build curl.
Like Exiv2, curl is both a library and a very useful command-line tool. Have a look at the man page for curl: http://linux.about.com/od/commands/l/blcmdl1_curl.htm and you'll discover the very interesting --range from-to option. You'll also find curl --verbose helpful as it shows you what's being done by curl:

Robins-iMac:temp rmills$ curl http://clanmills.com/files/CV.pdf > CV.pdf
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 38539  100 38539    0     0  91844      0 --:--:-- --:--:-- --:--:--  106k
Robins-iMac:temp rmills$ ls -alt CV.pdf
-rw-r--r--  1 rmills  staff  38539 Mar  1 17:41 CV.pdf

Robins-iMac:temp rmills$ curl --verbose http://clanmills.com/files/CV.pdf > /dev/null 
* About to connect() to clanmills.com port 80 (#0)
*   Trying 173.254.28.62...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* connected
* Connected to clanmills.com (173.254.28.62) port 80 (#0)
> GET /files/CV.pdf HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5
> Host: clanmills.com
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Sat, 02 Mar 2013 01:41:41 GMT
< Server: Apache
< Last-Modified: Sat, 10 Nov 2012 20:55:11 GMT
< Accept-Ranges: bytes
< Content-Length: 38539
< Vary: Accept-Encoding
< Content-Type: application/pdf
< 
{ [data not shown]
100 38539  100 38539    0     0  91458      0 --:--:-- --:--:-- --:--:--  105k
* Connection #0 to host clanmills.com left intact
* Closing connection #0

Robins-iMac:temp rmills$ curl http://clanmills.com/files/CV.pdf | od -a | head

0000000   %   P   D   F   -   1   .   4  nl   %   G   l  si   "  nl   5
0000020  sp   0  sp   o   b   j  nl   <   <   /   L   e   n   g   t   h
0000040  sp   6  sp   0  sp   R   /   F   i   l   t   e   r  sp   /   F
0000060   l   a   t   e   D   e   c   o   d   e   >   >  nl   s   t   r
0000100   e   a   m  nl   x  fs   m   ]   k dc3   ^   6   u  rs   X   q
0000120   k   *  gs   6   i   R   t   ^   Y  em   N   d   ^   7   U   R
0000140 eot   H stx   $   ?   U   w   H   5  gs   X   Z   $  gs  ht   {
0000160   A   +   Y dc2   b   ]   V syn   V   6   r   3   |   !   ?   7
0000200 bel   $ soh   s   @   <  sp   x   .   V  so   ;   S   [   c   a
0000220   >   $   A   \   N   }   <   8   x   r   (   . dc4   >   *   ]

Robins-iMac:temp rmills$ curl --range 23-60 http://clanmills.com/files/CV.pdf 
<</Length 6 0 R/Filter /FlateDecode>>
Robins-iMac:temp rmills$ 

5) And now you might be able to implement the HttpIO class!

You know how to get the length of the file (it's in the Content-Length: header). You'll find the HTTP verb HEAD is designed to provide this information. And you know how to read a range of bytes!
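
Something like this, perhaps, to pick up the Content-Length with a HEAD request via libcurl (CURLOPT_NOBODY turns the GET into a HEAD; error handling omitted):

#include <curl/curl.h>
#include <iostream>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://clanmills.com/files/CV.pdf");
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   // HEAD: headers only, no body
    curl_easy_perform(curl);
    double length = 0;
    curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &length);
    std::cout << "Content-Length: " << static_cast<long>(length) << "\n";
    curl_easy_cleanup(curl);
    return 0;
}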

If you download the file http://clanmills.com/LargsPanorama.jpg:

1008 rmills@rmills-linux:/Windows/Users/rmills/clanmills $ cd ~/temp
1009 rmills@rmills-linux:~/temp $ curl http://clanmills.com/LargsPanorama.jpg > Largs.jpg
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  485k  100  485k    0     0   115k      0  0:00:04  0:00:04 --:--:--  126k
1010 rmills@rmills-linux:~/temp $ open Largs.jpg 
1011 rmills@rmills-linux:~/temp $ exiv2 -pt Largs.jpg
Exif.Image.Orientation                       Short       1  top, left
Exif.Image.XResolution                       Rational    1  72
Exif.Image.YResolution                       Rational    1  72
Exif.Image.ResolutionUnit                    Short       1  inch
Exif.Image.Software                          Ascii      29  Adobe Photoshop CS Macintosh
Exif.Image.DateTime                          Ascii      20  2007:01:28 11:28:40
Exif.Image.ExifTag                           Long        1  164
Exif.Photo.ColorSpace                        Short       1  Uncalibrated
Exif.Photo.PixelXDimension                   Long        1  2160
Exif.Photo.PixelYDimension                   Long        1  345
Exif.Thumbnail.Compression                   Short       1  JPEG (old-style)
Exif.Thumbnail.XResolution                   Rational    1  72
Exif.Thumbnail.YResolution                   Rational    1  72
Exif.Thumbnail.ResolutionUnit                Short       1  inch
Exif.Thumbnail.JPEGInterchangeFormat         Long        1  302
Exif.Thumbnail.JPEGInterchangeFormatLength   Long        1  1688
1012 rmills@rmills-linux:~/temp $ 

The aim of the project is to write a new HttpIO class (derived from BasicIO), so that the command:

exiv2 -pt http://clanmills.com/LargsPanorama.jpg produces the same output as above.

The "quick and dirty" solution is when "open" is called, you use curl to download the file to /tmp/LargsPanorama.jpg, then delegate everything to the FileIO class. Works? Of course! Efficient? No! You copied the whole file.

Here are some things to think about:
  1. The clever solution is to use the byte-range feature of curl to copy only those bytes actually requested by Exiv2.
  2. We don't want to invoke external programs like curl. We want to link and call libcurl for ourselves.
  3. What happens if exiv2 requests the same bytes more than once? We want to cache them of course.
  4. What about other protocols: FTP/SSH and so on? Well, that's what the project's about.

I hope this all makes sense. I know you'll ask me when you're confused.

This photo is of the beautiful town of Largs in Scotland where I was born.
