AirPrint on MacBook + Dropbox = Add a printer to iPhone

Sometimes it is desirable to save a webpage on your iPhone. On PCs and Macs, this can be done by printing the page to a PDF printer. How can the same be achieved on an iPhone?

If you have a MacBook and an iPhone on the same network, the following trick is helpful.

First of all, since Mac OS X 10.6.6 still does not support AirPrint, we need to enable it ourselves.
This link provides a method to set up AirPrint on a MacBook and use it from an iPhone.

Next, assume your Dropbox directory is /Users/username/Dropbox. Create a symbolic link inside the Dropbox directory that points to the CUPS-PDF output directory:

ln -s /Users/Shared/CUPS-PDF/username /Users/username/Dropbox/airprint

This ensures that newly printed documents are synced by Dropbox. Enjoy!


Extending Python with SWIG

To find the directory where SWIG's bundled interface files are installed:

swig -swiglib

To link the compiled objects into a Python extension module (the -bundle flag is the Mac OS X way of building a loadable module):

gcc -bundle `python-config --ldflags` example.o example_wrap.o -o _example.so
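
For context, here is a minimal sketch of the SWIG workflow these commands belong to. The fact function, the module name example, and the build steps in the comment are illustrative assumptions (loosely following the SWIG tutorial), not taken from this post; only the file names match the link command above.

/* example.c -- a small C function we want to call from Python (illustrative) */
int fact(int n) {
  return (n <= 1) ? 1 : n * fact(n - 1);
}

/* example.i -- the SWIG interface file describing the module */
%module example
%{
extern int fact(int n);   /* copied verbatim into the generated wrapper */
%}
extern int fact(int n);    /* SWIG generates a Python wrapper for this declaration */

/*
 * Assumed build steps:
 *   swig -python example.i                                # emits example_wrap.c and example.py
 *   gcc -c example.c example_wrap.c `python-config --cflags`
 *   gcc -bundle `python-config --ldflags` example.o example_wrap.o -o _example.so
 * Afterwards, "import example; example.fact(5)" in Python should return 120.
 */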

Erase method in STL

Erase is a method of the STL container classes. Given a range, it removes the elements in the iterator range [first, last) and returns an iterator to the element that followed the last removed one. For contiguous containers such as vector, every element after the erased range must be shifted, so a single call can take O(n) time; for node-based containers such as list (used below), removing an element is constant time. In either case, iterators to the erased elements are invalidated.

(The following code was copied verbatim from cplusplus.com.)

// erasing from list
#include <iostream>
#include <list>
using namespace std;

int main ()
{
  unsigned int i;
  list<unsigned int> mylist;
  list<unsigned int>::iterator it1,it2;

  // set some values:
  for (i=1; i<10; i++) mylist.push_back(i*10);

                              // 10 20 30 40 50 60 70 80 90
  it1 = it2 = mylist.begin(); // ^^
  advance (it2,6);            // ^                 ^
  ++it1;                      //    ^              ^

  it1 = mylist.erase (it1);   // 10 30 40 50 60 70 80 90
                              //    ^           ^

  it2 = mylist.erase (it2);   // 10 30 40 50 60 80 90
                              //    ^           ^

  ++it1;                      //       ^        ^
  --it2;                      //       ^     ^

  mylist.erase (it1,it2);     // 10 30 60 80 90
                              //        ^

  cout << "mylist contains:";
  for (it1=mylist.begin(); it1!=mylist.end(); ++it1)
    cout << " " << *it1;
  cout << endl;

  return 0;
}
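
For contrast, here is a minimal sketch (mine, not part of the quoted example) of the same erase interface on a contiguous container, std::vector, where the linear cost comes from shifting the elements that follow the erased range:

// erasing from a vector: elements after the erased range are shifted left
#include <iostream>
#include <vector>
using namespace std;

int main ()
{
  vector<int> v;
  for (int i=1; i<10; i++) v.push_back(i*10);        // 10 20 30 40 50 60 70 80 90

  // erase a single element; erase returns an iterator to the element after it
  vector<int>::iterator it = v.erase(v.begin()+1);   // removes 20, it now points to 30

  // erase the range [first, last): removes 40 50 60 and shifts 70 80 90 left
  v.erase(it+1, it+4);                                // 10 30 70 80 90

  cout << "v contains:";
  for (unsigned int i=0; i<v.size(); i++) cout << " " << v[i];
  cout << endl;

  return 0;
}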

Regression vs. Classification (repost)

(originally from this post)

For learning reductions we have been concentrating on reducing various complex learning problems to binary classification. This choice needs to be actively questioned, because it was not carefully considered.

Binary classification is learning a classifier c: \mathbf X \rightarrow \{0,1\} so as to minimize the probability of being wrong, \text{Pr}_{\mathbf x,y \sim D}(c(\mathbf x) \neq y).

The primary alternative candidate seems to be squared error regression. In squared error regression, you learn a regressor s: \mathbf X \rightarrow \left[0,1\right] so as to minimize squared error, \mathbb E_{\mathbf x,y \sim D} (s(\mathbf x)-y)^2.

It is difficult to judge one primitive against another. The judgement must at least partially be made on nontheoretical grounds because (essentially) we are evaluating a choice between two axioms/assumptions.

These two primitives are significantly related. Classification can be reduced to regression in the obvious way: you use the regressor to predict D(y=1|\mathbf x), then threshold at 0.5. For this simple reduction a squared error regret of r implies a classification regret of at most r^{0.5}. Regression can also be reduced to classification using the Probing algorithm. (This is much more obvious when you look at an updated proof.) Under this reduction, a classification regret of r implies a squared error regression regret of at most r.
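
Written out in the notation above (the regret symbols below are mine, not from the original post), the thresholding reduction and the stated guarantee read:

c_s(\mathbf x) = 1 \text{ if } s(\mathbf x) \ge 0.5, \text{ and } 0 \text{ otherwise}

\text{reg}_{0/1}(c_s) \le \left(\text{reg}_{\text{sq}}(s)\right)^{0.5}

where \text{reg}_{0/1} denotes classification regret and \text{reg}_{\text{sq}} denotes squared error regret, each measured relative to the best possible predictor under D.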

Both primitives enjoy a significant amount of prior work with (perhaps) classification enjoying more work in the machine learning community and regression having more emphasis in the statistics community.

The (nonhistoric) reasons for preferring classification seem to be:

1. Aesthetically, what could be a more natural primitive than predicting a single bit?
2. Classification is (potentially) inherently more representationally concise. When translated into transistors, a classification algorithm might use fewer transistors than a regressor, simply because the natural representation is bits rather than reals (~= floats).

There are several reasons to consider regression over classification:

1. More uniform convergence. For a single squared error regressor, the rate of convergence of the estimate of squared error to the expected squared error goes as 1/m, assuming IID samples. For a single binary classifier, the rate of convergence of the estimate of binary loss to the expected binary loss goes as a function varying between 1/m and 1/m^{0.5}.
2. There is a computational inequivalence between the primitives, as far as we know. In particular, the Probing algorithm must call a classifier several times in several ways to make a high precision regression prediction. On the other hand, classification via regression requires only one call to the underlying regressor.
3. Squared error regression learning is often easier than 0/1 classification learning, because the squared error loss is convex while the 0/1 loss is not (see the sketch after this list). Note: this does not imply that squared error regression is convex (it isn't for general regression algorithms). Instead, it just means that nonconvexity is not enforced by the loss function.
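
As a concrete illustration of the convexity point (my example, not from the original post): for a linear predictor parameterized by \mathbf w, the empirical squared error

\frac{1}{m} \sum_{i=1}^{m} (\mathbf w \cdot \mathbf x_i - y_i)^2

is a convex function of \mathbf w, whereas the empirical 0/1 loss of the thresholded predictor

\frac{1}{m} \sum_{i=1}^{m} \mathbf{1}\left[\, \mathbf{1}[\mathbf w \cdot \mathbf x_i \ge 0.5] \neq y_i \,\right]

is piecewise constant in \mathbf w, and hence nonconvex except in degenerate cases.
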
The mathematical evidence points toward squared error regression as a better primitive, although doubts still seem reasonable to entertain.

Just upgraded

Just upgraded this blog to enable audio playback.