Fazal Majid's low-intensity blog

Sporadic pontification


The real story behind the WSIS

There has been much speculation recently about a possible rift in Internet governance. Essentially, many countries resent the US government’s control over the Internet’s policy oversight. They advocate the transfer of those responsibilities to the International Telecommunication Union (ITU), a more multilateral venue. The big news is that the European Union, which previously sat on the fence, came out strongly in favor of this proposal. Unsurprisingly, the US government is hostile to it. More surprisingly, I agree with its unilateralist impulse, obviously for very different reasons. I was planning on writing up a technical explanation, as most of the IT press has it completely wrong, as usual, but Eric Rescorla has beaten me to the punch with an excellent summary.

Many commentators have made much hay of the fact that the ITU is under the umbrella of the United Nations. The Bush administration is clearly wary of the UN, to say the least, but that is a fairly widespread sentiment among the American policy establishment, by no means limited to Republicans. For some reason, many Americans harbor the absurd fear that somehow the UN is plotting against US sovereignty. Of course, the reality is the UN cannot afford its parking tickets, let alone black helicopters. American hostility towards the UN is curious, as it was the brainchild of a US president, Franklin D. Roosevelt, its charter was signed in San Francisco (at Herbst Theatre, less than a mile from where I live), and it is headquartered in New York.

The UN is ineffective and corrupt, but that is because the powers on the Security Council want it that way. The UN does not have its own army and depends on its member nations, especially those on the Security Council, to carry out its missions. It is hardly fair to lay the blame for failure in Somalia on the UN’s doorstep. As for corruption, mostly in the form of patronage, it was the way the US and the USSR greased the wheels of diplomacy during the Cold War, buying the votes of tin-pot nations by granting cushy UN jobs to the nephews of their kleptocrats.

A more damning condemnation of the UN is the fact that the body does not embody any kind of global democratic representation. The principle is one country, one vote. Just as residents of Wyoming have 60 times more power per capita in the US Senate than Californians, India’s billion inhabitants have as many votes in the General Assembly as those of the tiny Grand Duchy of Liechtenstein. The real action is in the Security Council anyway, but populous countries like India are not fully represented there either. Had Americans not had a soft spot for Chiang Kai-Shek, China, with its own billion souls, would not have a seat at that table either. That said, the Internet population is spread unevenly across the globe, and the Security Council is probably more representative of it.

In any case, the ITU was established in 1865, long before the UN, and its institutional memory is much different. It is also based in Geneva, like most international organizations, geographically and culturally a world away from New York. In other words, even though it is formally an arm of the UN, the ITU is in practice completely autonomous. The members of the Security Council do not enjoy veto rights in the ITU, and the appointment of its secretary general, while a relatively technocratic and unpoliticized affair, is not subject to US approval, or at least acquiescence, the way the UN secretary-general’s is, or that of more sensitive organizations like the IAEA.

My primary objections to the ITU are not about its political structure, governance or democratic legitimacy, but about its competence, or more precisely the lack of it. The ITU is basically the forum where government PTT monopolies meet incumbent telcos to devise big standards and blow big amounts of hot air. Well into the nineties, they were pushing for a bloated network architecture called OSI as an alternative to the Internet’s elegant TCP/IP protocol suite. I was not surprised — I used to work at France Télécom’s R&D labs, and had plenty of opportunity to gauge the “caliber” of the incompetent parasites who would go on ITU junkets. Truth be told, those people’s chief competency is bureaucratic wrangling, and like rats leaving a ship, they have since decamped to the greener pastures of the IETF, whose immune system could not prevent a dramatic drop in the quality of its output. The ITU’s institutional bias is towards complex solutions that enshrine the role of legacy telcos, managed scarcity and self-proclaimed intelligent networks that are architected to prevent disruptive change by users on the edge.

When people hyperventilate about Internet governance, they tend to focus on the Domain Name System, even though the real scandal is IPv4 address allocation, like the fact that Stanford and MIT each have more IP addresses allocated to them than all of China. Many other hot-button items, like the fight against child pornography or pedophiles, belong more properly with criminal-justice organizations like Interpol. But let us humor the pundits and focus on the DNS.

First of all, the country-specific top-level domains like .fr, .cn or the new kid on the block, .eu, are for all practical purposes already under decentralized control. Any government that is afraid the US might tamper with its own country domain (for some reason Brazil is often mentioned in this context) can easily take measures to prevent disruption of domestic traffic by requiring its ISPs to point their DNS servers to authoritative servers under its control for that zone. Thus, the area of contention is really the international generic top-level domains (gTLDs), chief among them .com, the only one that really matters.
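
To make the mechanism concrete, here is a small illustration in Python using the dnspython library (an assumption on my part, as is the placeholder server address from the documentation range): nothing prevents a resolver from querying nationally controlled authoritative servers for a country’s own zone directly, instead of following the delegation handed down from the ICANN-managed root.

    # Sketch only: query a hypothetical nationally operated .br
    # authoritative server directly, bypassing the root delegation.
    import dns.message
    import dns.query

    NATIONAL_BR_SERVER = "192.0.2.53"  # placeholder address, not a real server

    query = dns.message.make_query("registro.br", "A")
    response = dns.query.udp(query, NATIONAL_BR_SERVER, timeout=5.0)
    for rrset in response.answer:
        print(rrset)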

What is the threat model for a country that is distrustful of US intentions? The possibility that the US government might delete or redirect a domain it does not like, say, al-qaeda.org? Actually, this happens all the time, not due to the malevolence of the US government, but to the active incompetence of Network Solutions (NSI). You may recall NSI, now a division of Verisign, is the entrenched monopoly that manages the .com top-level domain, and which has so far successfully browbeaten ICANN into prolonging its monopoly, one of its most outrageous claims being that it has intellectual property rights to the .com database. Their security measures, on the other hand, owe more to Keystone Kops, and they routinely allow domain names like sex.com to be hijacked. Breaking the NSI monopoly would be a worthwhile policy objective, but it does not require a change in governance, just the political will to confront Verisign (which, granted, may be more easily found outside the US).

This leads me to believe the root cause for all the hue and cry, apart from the ITU angling for relevance, may well be the question of how the proceeds from domain registration fees are apportioned. Many of the policy decisions concerning the domain name system pertain to the creation of new TLDs like .museum or, more controversially, .xxx. The fact is, nobody wakes up in the middle of the night thinking: “I wish there were a top-level domain .aero so I could reserve a name under it instead of my lame .com domain!” All these alternative TLDs are at best poor substitutes for .com. Registrars, on the other hand, who provide most of the funding for ICANN, have a vested interest in the proliferation of TLDs, as that gives them more opportunities to collect registration fees.

The resistible ascension of the smartphone

I bought a Nokia 6682 phone a couple of weeks ago, as an upgrade for my Nokia 6230. Actually, I have my parents signed up on my service plan, and I was planning on sending them the 6230 to replace an older phone they lost, and taking advantage of this as an excuse to upgrade… The 6682 is a Symbian “smartphone” sporting Nokia’s Series 60 UI, and I was influenced by rave reviews like Russell Beattie’s. In recent years, Nokia has been churning out phones with crackpot designs and dubious usability for coolness’ sake. There must have been a customer backlash, as their recent phones like the 6682 have a much more reasonable, reassuringly boring but functional design. Another reason is that Apple’s iSync only works with Nokia’s Series 60 phones, and it will sync photos from the OS X address book.

I returned the phone for a refund last Friday, because the ergonomics are simply atrocious, and from a usability point of view it was actually an unacceptable downgrade from the Series 40 (non-Symbian) Nokia 6230. The low-res 176×208 screen has significantly lower information density than the 320×480 or 640×480 screens now standard on most PDAs, and makes web browsing almost useless. The only thing it has going for it is a semi-decent camera.

Even basic functionality like the address book is poorly implemented. When you scroll down your contacts list, you can select one to decide whether you want to reach them on their home or mobile number. The problem is, the next time you want to make a call and open the address book, you do not start afresh: you are still inside the previous contact’s entry and have to back out of it first. Let’s not even mention the ridiculously complex key sequence required to record a voice memo.

I have to contrast this with my Palm Tungsten T3, in my book still the best PDA ever (especially compared to the underwhelming, plasticky T5 or the boat-anchor and RAM-starved LifeDrive). Recording a voice memo merely requires pressing and holding a dedicated button, something that can be done one-handed by touch alone. Palm’s address book quick-lookup scrolling algorithm is a model of efficiency yet to be matched on any phone I have ever used. PalmOS might be getting long in the tooth, especially as regards multitasking, and its future is cloudy, but it still has a serious edge in usability. This is not by accident — Palm paid as much attention to the user interface as Apple did in its day, as this anecdote by New York Times technology columnist David Pogue illustrates:

I once visited Palm Computing in its heyday. One guy I met there introduced himself as tap counter. It was his job to make sure that no task on the PalmPilot required more than three taps of the stylus on the screen. More than three steps, and the feature had to be redesigned. Electronics should save time, not waste it.

In retrospect, I should not have been surprised by the 6682’s poor ergonomics; they were readily apparent from day one. The device is neither a good phone nor an even halfway acceptable PDA. I decided to give it a chance, thinking it could just be a question of settling into an unfamiliar user interface. I did not need as long an adaptation period when moving from my old T68i to the 6230, and after two weeks my dim initial opinion of the Series 60 had, if anything, deteriorated further. Russell Beattie can dish it, but he can’t take it. In hindsight, Beattie’s defensiveness about smart people preferring dumb phones over jack-of-all-trades devices was not a good sign.

Pundits have been predicting the demise of the PDA at the hands of the smartphone for many years. Phones certainly outsell PDAs by a handy margin, but a small minority of them are smartphones, and I suspect most people get them for the improved cameras and disregard the unusable advanced functionality. I tend to agree with this old but still valid assessment — the best option is to have a decent PDA in hand, connected to the cell phone in your pocket via Bluetooth.

I suspect the smartphones’ ergonomic shortcomings are structural, not just due to a lack of usability skills on the manufacturers’ part. Nokia knows how to design good user interfaces, like Navi or Series 40, but the situation with Series 60 is not going to be rectified anytime soon. The reason is that most people buy their cell phones with a subsidy that is recouped over the duration of a one- or two-year minimum contract. This control over distribution gives the mobile operators the ultimate say over the feature set. This is most visible in branding elements like Cingular’s “Media store” icon that flogs overpriced garbage like downloadable ring tones.

To add injury to insult, deleting those “features” is disabled, so they keep hogging scarce memory and screen real estate. Carriers also disable features that would allow people to use their phones without being nickel-and-dimed for expensive intelligent-network services like MMS, such as some Bluetooth functionality or the ability to send photos over email rather than MMS. It is highly likely that carriers will fight tooth and nail against the logical inclusion of WiFi and VoIP in future handsets. This conflict of interest between carriers and users won’t be resolved until regulators compel them to discontinue what is in effect a forced bundling practice.

Mobile carriers, like their telco forebears, seem to believe that if they piss on something, it improves the flavor… This is also the reason why I think mobile operator cluelessness about mobile data services is terminal — they keep pushing their failed walled-garden model of WAP services using phones, and gouge for the privilege of using a PDA or laptop to access the real Internet via Bluetooth, while at the same time not deigning to provide any support. WiFi may not be an ideal technology, especially in terms of ubiquity, but as long as carriers make us unwashed users jump through hoops to be allowed access to their data networks, low-hassle WiFi access from a PDA will be the superior, if intermittent, alternative to a data-enabled phone. As for the aborted phone upgrade, I guess I will just wait for the Nokia 6270 to hit these blighted shores.

Here, take my money. Please. Pretty please?

Eighty percent of success is showing up. — Woody Allen

My company, Kefta, helps its clients, usually Fortune 500 companies with e-commerce operations, improve their online conversion rates. We typically increase sales by 10–20%. This is not rocket science; it is more akin to Retail 101: simple things like modifying pages to stop showing offers for products we know the user has already purchased, or making offers more relevant when we know the prospect is interested in a specific product (e.g. because they come from Google after searching for that keyword).
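
As an illustration of how simple this kind of targeting can be, here is a minimal sketch in Python (not Kefta’s actual code; the function name and example URL are invented) that pulls the search keyword out of a Google referrer URL, which at the time carried the query string, so a landing page can show a more relevant offer.

    from urllib.parse import urlparse, parse_qs

    def search_keyword(referrer):
        """Return the search query embedded in a Google referrer, or None."""
        parts = urlparse(referrer)
        if "google." not in parts.netloc:
            return None
        terms = parse_qs(parts.query).get("q")
        return terms[0] if terms else None

    # search_keyword("http://www.google.com/search?q=gigabit+ethernet+switch")
    # returns "gigabit ethernet switch"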

Sometimes I wonder if what we are doing is not too sophisticated by far, when I see particularly boneheaded practices at places that really should know better. Dell is often touted as a model of logistical and operational excellence, and for being a web-centric company. My experience is that many products they carry are not listed on the web site and can only be ordered by phone. You also have to phone to get a discount.

Despite being a telecoms engineer by training, I loathe phones. Phones are great for keeping an emotional connection with friends and family, but are a staggeringly inefficient form of communication for business purposes. They do not leave an audit trail, and even when they do (my voice mail system automatically forwards them to me by email as a MIME-encoded WAV attachment), they hog disk space and are not searchable. You can scan an email in a few seconds, but are forced to listen to voice mail at whatever pace it was dictated. Well, at least with WAV attachments, I can skip back to write down a phone number without having to replay the whole message.

Coming back to Dell, I recently needed to buy a Gigabit Ethernet switch from them. I sent an email to my rep, which he promptly ignored. I tried calling, at least 4 or 5 times, but my only option was voice-mail jail. In the end, I passed the buck to a junior colleague, who tried to leave voice mail and discovered he couldn’t because it was full. With persistence, he managed to get Dell to condescend to taking our order. No customer should have to go through so many hoops just so the vendor can take their money.

I am ragging on Dell, but most IT vendors do as poorly. I can understand expensive support calls receiving lower priority and resources than sales calls — after all, the company already has your money. Not having their act together for the simple matter of order-taking simply boggles the mind. Workflow systems, automatic call distributors and other technologies designed to prevent this have been available for many years. It looks like nobody has bothered to go through the user experience, even though these bugs (and many other glaring deficiencies like session timeouts) could be caught by the most cursory of inspections.

Dell sends an automated satisfaction survey after a sale. Unlike the order-taking process, the survey follows up if you do not respond… That said, it is the usual worthless multiple-choice format asking me to rate irrelevant questions on a scale of 1 to 10. I don’t recall if the form had a box for free-form comments, but even if it did, the survey design not-so-subtly signals that no human is ever going to read what you type there, and thus it is not worth the effort to fill it in. The numeric answers are probably collated into an automated report nobody pays any attention to anymore, because garbage in, garbage out.

If you are serious about customer feedback, make it open and free-form, and make sure each and every piece of feedback is read by a human (they come quite cheap in the Midwest and the developing world). Each should be acknowledged personally (not with an automated reply) and followed through until the issue is either resolved or a decision is taken not to implement the suggested changes (because they are too expensive, impractical or whatever other reason). In both cases, inform the user who bothered to give feedback — most large companies pay a fortune for market research while at the same time ignoring the free (and usually very valuable) insights submitted by their customers. Granted, you cannot always resolve every complaint from unreasonable customers, but feedback on process issues should always be taken into consideration.

Sometimes dropped orders are due to active incompetence rather than careless neglect. While implementing a campaign for one of our clients, we realized there was a bug in one of their ordering forms that would cause orders to be dropped. Our software sits on top of the client’s website and monitors it precisely for exception cases like these, and we told them we could, at no extra charge, send the dropped order details to an email address of their choosing so the order could be re-entered manually. They declined our offer for various reasons related to internal politics and trade union issues; essentially, they were refusing to bend down and pick up money lying on the floor (our estimate was that they were losing tens to hundreds of thousands of dollars of customer lifetime value every month through inaction).

You don’t have to endure a multi-million dollar ERP or CRM implementation to improve follow-through. Where there is a will, there is a way, and a little creative thinking will usually find a work-around that can get the job done until a more robust solution can be deployed. One of our clients, a major bank, was in the early stages of developing its e-commerce operations, and simultaneously in the throes of a Siebel implementation. Their online forms would simply send an email to a branch office for manual processing. We were implementing a satisfaction survey for them, and offered to send an email automatically to a supervisor if the customer’s order had not been processed, at least until Siebel came on-line. Poor man’s workflow, but email workflows are often quite effective, especially for remedial situations like these.
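
For the curious, this kind of “poor man’s workflow” amounts to very little code. The sketch below is hypothetical (the names, addresses and order record format are invented), but it captures the idea: if an order has not been marked as processed within a deadline, escalate it to a supervisor by plain email.

    import smtplib
    from datetime import datetime, timedelta
    from email.message import EmailMessage

    ESCALATION_DEADLINE = timedelta(hours=24)

    def escalate_stale_orders(orders, smtp_host="localhost",
                              supervisor="supervisor@example.com"):
        """orders: iterable of dicts with 'id', 'received' and 'processed' keys."""
        now = datetime.utcnow()
        with smtplib.SMTP(smtp_host) as smtp:
            for order in orders:
                if order["processed"] or now - order["received"] < ESCALATION_DEADLINE:
                    continue
                msg = EmailMessage()
                msg["From"] = "forms@example.com"
                msg["To"] = supervisor
                msg["Subject"] = "Order %s unprocessed after 24 hours" % order["id"]
                msg.set_content("Order %s received %s has not been processed yet; "
                                "please follow up." % (order["id"], order["received"]))
                smtp.send_message(msg)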

As I mentioned, sometimes I think I am in the wrong business, and should instead start a consultancy to teach some clue to large companies that have grown complacent. But then again, that is assuming somebody cares, beyond paying lip service to Customer Relationship Management. There is no point in setting up complex systems to build a lifelong relationship with repeat customers if you can’t even take their orders in the first place.

Temboz 0.7 released

I have released version 0.7 of Temboz. The main improvements in the new version are a better user interface, ad filtering, and garbage collection of articles older than 6 months. Several facilities have also been added to make it easier to write and test filtering rules – you can now add comments to a rule, or purge and reload a feed from the feed details page to see whether changed rules are kicking in or not.

Temboz now also has a publicly accessible CVStrac with a documentation Wiki and a bug-tracking database (where change requests can also be submitted). The Wiki is publicly read-only for now, but if you would like to contribute to it, drop me an email and I will create an account with edit privileges for you.

The megapixel myth revisited

Introduction

As my family’s resident photo geek, I often get asked what camera to buy, especially now that most people are upgrading to digital. Almost invariably, the first question is “how many megapixels should I get?”. Unfortunately, it is not as simple as that: megapixels have become the photo industry’s equivalent of the personal computer industry’s megahertz myth, and in some cases this leads to counterproductive design decisions.

A digital photo is the output of a complex chain involving the lens, various filters and microlenses in front of the sensor, and the electronics and software that post-process the signals from the sensor to produce the image. The image quality is only as good as the weakest link in the chain. High-quality lenses are expensive to manufacture, for instance, and manufacturers often skimp on them.

The problem with megapixels as a measure of camera performance is that not all pixels are born equal. No amount of pixels will compensate for a fuzzy lens, but even with a perfect lens, there are two factors that make the difference: noise and interpolation.

Noise

All electronic sensors introduce some measure of electronic noise, due among other things to the random thermal motion of electrons. This shows up as little colored flecks that give a grainy appearance to images (although the effect is quite different from film grain). The less noise, the better, obviously, and there are only so many ways to improve the signal-to-noise ratio:

  • Reduce noise by improving the process technology. Improvements in this area occur slowly; each new process generation typically takes 12 to 18 months to appear.
  • Increase the signal by increasing the amount of light that strikes each sensor photosite. This can be done by using faster lenses or larger sensors with larger photosites. Or by only shooting photos in broad daylight where there are plenty of photons to go around.

Fast lenses are expensive to manufacture, especially fast zoom lenses (a Canon or Nikon 28-70mm f/2.8 zoom lens costs well over $1000). Large sensors are more expensive to manufacture than small ones because you can fit fewer on a wafer of silicon, and as the likelihood of one being ruined by an errant grain of dust is higher, large sensors have lower yields. A sensor with twice the die area might cost four times as much. A “full-frame” 36mm x 24mm sensor (the same size as 35mm film) stresses the limits of current technology (it has 4 times the die size of the latest-generation “Sandy Bridge” quad-core Intel Core i7), which is why the cheapest full-frame bodies like the Canon EOS 5DmkII or Nikon D700 cost $2,500, whereas a DSLR with an APS-C sized sensor (which has 40% of the surface area of a full-frame sensor) can be had for under $500. Larger professional medium-format digital backs can easily reach $25,000 and higher.
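
A quick back-of-the-envelope calculation shows just how lopsided these comparisons are. The sketch below uses approximate sensor dimensions and assumes the sensors are purely shot-noise limited, in which case the light collected scales with photosite area and the signal-to-noise ratio grows only as the square root of that signal.

    import math

    # width (mm), height (mm), megapixels -- approximate, for illustration only
    SENSORS = {
        "full frame, 21MP": (36.0, 24.0, 21.0),
        "APS-C, 10MP":      (23.6, 15.8, 10.0),
        '1/1.7" compact, 10MP': (7.6, 5.7, 10.0),
    }

    for name, (width, height, megapixels) in SENSORS.items():
        area_mm2 = width * height
        photosite_um2 = area_mm2 / megapixels    # 1 mm^2 = 1e6 um^2, 1 MP = 1e6 pixels
        relative_snr = math.sqrt(photosite_um2)  # arbitrary units, shot-noise limited
        print("%-22s %4.0f mm^2, %5.1f um^2/pixel, relative SNR %4.1f"
              % (name, area_mm2, photosite_um2, relative_snr))

Run with these figures, the compact’s photosites come out at roughly a tenth the area of the APS-C DSLR’s, i.e. more than three stops less light per pixel before the lens is even taken into account.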

This page illustrates the difference in size between the sensors on various consumer digital cameras and those on some high-end digital SLRs. Most compact digital cameras have tiny 1/1.8″ or 2/3″ sensors at best (these numbers are a legacy of TV camera tube ratings and bear no straightforward relationship to actual sensor dimensions; see DPReview’s glossary entry on sensor sizes for an explanation).

For any given generation of cameras, the conclusion is clear – bigger pixels are better: they yield sharper, smoother images with more latitude for creative manipulation of depth of field. This is not true across generations, however: Canon’s EOS-10D had twice as many pixels as the two-generations-older EOS-D30 on a sensor of the same size, yet it still manages to have lower noise thanks to improvements in Canon’s CMOS process. Current bodies like the 7D have 6 times the pixels of the D30 while still having better noise levels, the benefit of ten years of progress by sensor engineers.

The problem is, as most consumers fixate on megapixels, many camera manufacturers deliberately cram too many pixels into too little silicon real estate just to have megapixel ratings that look good on paper. The current batch of point-and-shoot cameras crams 14 million pixels into tiny 1/2.3″ sensors. Only slightly less egregious, the premium-priced Canon G12 puts 10.1M pixels in a 1/1.7″ sensor; the resulting photosites are roughly 1/10 the size of those on similarly priced DSLRs like the 10-megapixel Nikon D3000 or the Canon Digital Rebel T3i (EOS 600D), and 1/16 of those on the significantly more expensive 21MP Canon 5DmkII.

Predictably, the noise levels of the G12 are poor in anything but bright sunlight, just as a “150 watt” ghetto blaster is incapable of reproducing the fine nuances of real music. The camera masks this with digital signal processing that smooths pictures, smudging the noise but also removing the very details those extra megapixels were supposed to deliver. A DSLR will yield far superior images in most circumstances, but naive purchasers can easily be swayed by headline megapixel counts into buying the inferior yet overpriced compact. Unfortunately, there is a Gresham’s law at work and manufacturers are still racing to the bottom: Nikon and Canon have also introduced cameras with tiny sensors pushed too far. You will notice that for some reason camera makers seldom show sample images taken in low available light…

Interpolation

Interpolation (along with its cousin, “digital zoom”) is the other way unscrupulous marketers lie about their cameras’ real performance. Fuji is the most egregious example with its “SuperCCD” sensor, which is arranged in diagonal lines of octagons rather than horizontal rows of rectangles. Fuji apparently feels this somehow gives it the right to double the pixel rating (i.e. a sensor with 6 million individual photosites is marketed as yielding 12 megapixel images). You can’t get something for nothing: the missing pixels are guessed using a mathematical technique named interpolation. This makes the image look larger, but does not add any real detail; you are just wasting disk space storing redundant information. My first digital camera was from Fuji, but I refuse to have anything to do with their current line due to shenanigans like these.

Most cameras use so-called Bayer interpolation, where each sensor pixel has a red, green or blue filter in front of it (the exact proportions are actually 25%, 50% and 25% as the human eye is more sensitive to green). An interpolation algorithm reconstructs the three color values from adjoining pixels, thus invariably leading to a loss of sharpness and sometimes to color artifacts like moiré patterns. Thus, a “6 megapixel sensor” has in reality only 1.5-2 million true color pixels.
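
To see what the Bayer reconstruction actually entails, here is a minimal sketch of demosaicing by plain bilinear interpolation (real cameras use far more sophisticated algorithms; this assumes the common RGGB layout and that the numpy and scipy libraries are available). At every pixel, two of the three color values are averages of neighbors rather than measurements, which is where the softness and the moiré come from.

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        """raw: 2-D array of sensor readings behind an RGGB Bayer mosaic."""
        rows, cols = np.indices(raw.shape)
        masks = {
            "r": (rows % 2 == 0) & (cols % 2 == 0),  # red on even rows and columns
            "g": (rows % 2) != (cols % 2),           # green on the remaining half of sites
            "b": (rows % 2 == 1) & (cols % 2 == 1),  # blue on odd rows and columns
        }
        # Averaging kernels: red and blue occupy 1/4 of the sites, green 1/2,
        # hence the two different neighborhoods.
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4.0
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4.0
        rgb = np.zeros(raw.shape + (3,))
        for channel, (name, kernel) in enumerate([("r", k_rb), ("g", k_g), ("b", k_rb)]):
            plane = np.where(masks[name], raw.astype(float), 0.0)
            rgb[..., channel] = convolve(plane, kernel, mode="mirror")
        return rgb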

A company called Foveon makes a distinctive sensor that has three photosites stacked vertically in the same location, yielding more accurate colors and sharper images. Foveon originally took the high road and called their sensor with 3×3 million photosites a 3MP sensor, but unfortunately they were forced to align themselves with the misleading megapixel ratings used by Bayer sensors.

Zooms

A final factor to consider is the zoom range on the camera. Many midrange cameras come with a 10x zoom, which seems mighty attractive in terms of versatility, until you pause to consider the compromises inherent in a superzoom design. The wider the zoom range, the more aberrations and distortions degrade image quality: chromatic aberration (a.k.a. purple fringing), barrel or pincushion distortion, and generally lower resolution and sharpness, especially in the corners of the frame.

In addition, most superzooms have smaller apertures (two exceptions being the remarkable constant f/2.8 aperture 12x Leica zoom on the Panasonic DMC-FZ10 and the 28-200mm equivalent f/2.0-f/2.8 Carl Zeiss zoom on the Sony DSC-F828), which means less light hitting the sensor, and a lower signal-to-noise ratio.

A reader was asking me about the Canon G2 and the Minolta A1. The G2 is 2 years older than the A1, and has 4 million 9-square-micron pixels as opposed to the A1’s 5 million 11-square-micron photosites, and should thus yield lower image quality, but the G2’s 3x zoom lens is fully one stop faster than the A1’s 7x zoom (i.e. it lets twice as much light in), and that more than compensates for the smaller pixels and older sensor generation.
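
The arithmetic behind that claim is straightforward (figures as quoted above, purely for illustration): the light gathered per photosite scales with photosite area times the light admitted by the lens, and a one-stop faster lens admits twice as much light.

    g2_photosite_um2, a1_photosite_um2 = 9.0, 11.0
    g2_lens_factor, a1_lens_factor = 2.0, 1.0   # the G2's lens is one stop faster

    g2_light = g2_photosite_um2 * g2_lens_factor   # ~18 units
    a1_light = a1_photosite_um2 * a1_lens_factor   # ~11 units
    print("G2 gathers about %.1fx the light per photosite" % (g2_light / a1_light))
    # prints roughly 1.6x, which is why the faster lens more than offsets
    # the smaller, older-generation photosites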

Recommendations

If there is a lesson in all this, it’s that unscrupulous marketers will always find a way to twist any simple metric of performance in misleading and sometimes even counterproductive ways.

My recommendation? As of this writing, get either:

  • An inexpensive (under $400, everything is relative) small-sensor camera rated at 2 or 3 megapixels (any more will just increase noise levels to yield extra resolution that cannot in any case be exploited by the cheap lenses usually found on such cameras). Preferably, get one with a 2/3″ sensor (although it is becoming harder to find 3 megapixel cameras nowadays; most will be leftover stock using an older, noisier sensor manufacturing process).
  • Or save up for the $1000 or so that an entry-level large-sensor DSLR like the Canon EOS-300D or Nikon D70 will cost. The DSLRs will yield much better pictures, including in low-light situations at ISO 800.
  • Film is your only option today for decent low-light performance in a compact camera. Fuji Neopan 1600 in an Olympus Stylus Epic or a Contax T3 will allow you to take shots in available light without a flash, and spare you the “red-eyed deer caught in headlights” look most on-camera flashes yield.

Conclusion

Hopefully, as the technology matures, large sensors will migrate into the midrange and make it worthwhile. I for one would love to see a digital Contax T3 with a fast prime lens and a low-noise APS-size sensor. Until then, there is no point in getting anything in between – midrange digicams do not offer better image quality than the cheaper models, while at the same time being significantly costlier, bulkier and more complex to use. In fact, the megapixel rat race and the wide-ranging but slow zoom lenses that find their way onto these cameras actually degrade their image quality relative to their cheaper brethren. Sometimes, more is less.

Updates

Update (2005-09-08):

It seems Sony has finally seen the light and is including a large sensor in the DSC-R1, the successor to the DSC-F828. Hopefully, this is the beginning of a trend.

Update (2006-07-25):

Large-sensor pocket digicams haven’t arrived yet, but if you want a compact camera that can take acceptable photos in relatively low-light situations, there is currently only one game in town, the Fuji F30, which actually has decent performance up to ISO 800. That is in large part because Fuji uses a 1/1.7″ sensor, instead of the nasty 1/2.5″ sensors that are now the rule.

Update (2007-03-22):

The Fuji F30 has since been superseded by the mostly identical F31fd and now the F40fd. I doubt the F40fd will match the F30/F31fd in high-ISO performance because it has two million unnecessary pixels crammed into the sensor, and indeed the maximum ISO rating was lowered, so the F31fd is probably the way to go, even though the F40 uses standard SD cards instead of the incredibly annoying proprietary Olympus-Fuji xD format.

Sigma has announced the DP-1, a compact camera with an APS-C size sensor and a fixed 28mm (equivalent) f/4 lens (wider and slower than I would like, but since it is a fixed focal length lens, it should be sharper and have less distortion than a zoom). This is the first (relatively) compact digital camera with a decent sensor, which, as the cherry on top, is also a true three-color Foveon sensor. I lost my Fuji F30 in a taxi, and this will be its replacement.

Update (2010-01-12):

We are now facing an embarrassment of riches.

  • Sigma built on the DP1 with the excellent DP2, a camera with superlative optics and sensor (albeit limited in high-ISO situations, but no worse than film), hamstrung by excruciatingly slow autofocus and generally sluggish responsiveness. In other words, best used for static subjects.
  • Panasonic and Olympus were unable to make a significant dent in the Canon-Nikon duopoly in digital SLRs with their Four-Thirds system (with one third less surface area than an APS-C sensor, it really should be called “Two-Thirds”). After that false start, they redesigned the system to eliminate the clearance required for an SLR mirror, leading to the Micro Four Thirds system. Olympus launched the retro-styled E-P1, followed by the E-P2, and Panasonic struck gold with its GF1, accompanied by a stellar 20mm f/1.7 lens (equivalent to 40mm f/1.7 in 35mm terms).
  • A resurgent Leica introduced the X1, the first pocket digicam with an APS-C sized sensor, essentially the same Sony sensor used in the Nikon D300. Extremely pricey, as usual with Leica. The relatively slow f/2.8 aperture means the advantage from its superior sensor compared to the Panasonic GF1 is negated by the GF1’s faster lens. The GF1 also has faster AF.
  • Ricoh introduced its curious modular camera, the GXR, in which the lens and sensor are swapped together as a unit; one option is the A12 module, which pairs an APS-C sensor with a 50mm-equivalent f/2.5 lens. Unfortunately, it is not pocketable.

According to Thom Hogan, Micro Four Thirds grabbed 11.5% of the Japanese market for interchangeable-lens cameras within a few months, something Pentax, Samsung and Sony have not managed despite years of trying. It is probably just a matter of time before Canon and Nikon join the fray, after too long turning a deaf ear to the chorus of photographers like myself demanding a high-quality compact camera. As for myself, I have already voted with my feet, successively getting a Sigma DP1, a Sigma DP2 and now a Panasonic GF1 with the 20mm f/1.7 pancake lens.

Update (2010-08-21):

I managed to score a Leica X1 last week from Camera West in Walnut Creek. Supplies are scarce and they usually cannot be found for love or money—many unscrupulous merchants are selling their limited stock on Amazon or eBay, at ridiculous (25%) markups over MSRP.

So far, I like it. It may not appear much smaller than the GF1 on paper, but in practice those few millimeters make a world of difference. The GF1 is a briefcase camera, not really a pocketable one, and I was subconsciously leaving it at home most of the time. The X1 fits easily in any jacket pocket. It is also significantly lighter.

High-ISO performance is significantly better than the GF1’s, by 1 to 1.5 stops. The lens is better than reported in technical reviews like DPReview’s—it exhibits curvature of field, which penalizes it in MTF tests.

The weak point in the X1 is its relatively mediocre AF performance. The GF1 uses a special sensor that reads out at 60fps, vs. 30fps for most conventional sensors (and probably even less for the Sony APS-C sensor used in the X1, possibly the same as in the Nikon D300). This doubles the AF speed of its contrast-detection algorithm over its competitors. Fuji recently introduced a special sensor that features on-chip phase-detection AF (the same kind used in DSLRs), let’s hope the technology spreads to other manufacturers.