Dots per Inch



Next to storage megabytes, one of the most commonly misunderstood computer specifications is dots per inch (dpi). Scanners, printers, and monitors all quote some measurement that sounds like dots per inch. When you take your job to a printing company for reproduction, they may refer to yet another, not wholly equivalent, number called lines per inch (lpi).

These concerns don't apply if you are working with a text-only document, but they are important when you are dealing with a scanned or computer-generated picture. Using the correct resolution gives you the quality you need without creating an unnecessarily large file. What the correct resolution is depends on the original and its ultimate use.

You need to be concerned with dpi or lpi because, except on a pen plotter, there is no such thing as a continuous line: every image is composed of dots of ink or toner placed close enough together that the eye is convinced it's seeing an unbroken line. Color is even more complicated because the devices cannot create a true orange, purple, or any other intermediate shade. Each of these colors is simulated by placing dots of three primary colors almost on top of each other. This smallest element of a picture is called a pixel, but what makes up a pixel varies with the type of picture.

When you transfer an image from a scanner or monitor to ink on paper, there are two more considerations. First, because there is no gray ink in the printer, printed gray shades are an illusion created from black dots and white spaces. Second, the primary colors used by a monitor are not the same as ink primaries. For a desktop user, most programs take care of these inconsistencies automatically, without any conscious intervention on your part. We deal with these issues in more detail later.

Let's look at how each type of image impacts computer resources.

A black-and-white image, such as a picture of a page of text or an ink drawing with no shades, is called a line scan. Each pixel can be either solid black or solid white and is represented by a single bit: 1 or 0. At 150 dpi, one square inch contains 22,500 pixels (150 x 150) and occupies approximately 2.8KB.

A photograph or other picture that has shades of gray is called a grayscale. Generally the computer assigns 256 levels between pure white and pure black, making each pixel occupy 8 bits, or one byte. Now our same square inch takes up about 22.5KB of space on the disc. If it were full color, each pixel would consist of a red, a green, and a blue value of one byte each, and the square inch now requires about 67.5KB.

If you go to a higher resolution, remember that the number of pixels increases with the square of the change in resolution. Doubling the dpi to 300 quadruples the disc space to 11.2KB, 90KB, and 270KB respectively. (These numbers may not exactly match real life because of file overhead or compression, but they give you an idea of the scale you're up against.)
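The arithmetic above can be sketched in a few lines of Python. The helper function is hypothetical, and KB here means 1,000 bytes, which matches the figures in the text:

```python
def scan_size_kb(dpi, square_inches=1.0, bits_per_pixel=1):
    """Approximate uncompressed size in KB for a scanned area.

    bits_per_pixel: 1 for line scan, 8 for grayscale, 24 for full color.
    Ignores file-format overhead and compression, as the article notes.
    """
    pixels = (dpi ** 2) * square_inches   # pixel count grows with the square of dpi
    bits = pixels * bits_per_pixel
    return bits / 8 / 1000                # bits -> bytes -> KB (decimal)

# One square inch at 150 dpi:
print(scan_size_kb(150, bits_per_pixel=1))    # line scan: ~2.8 KB
print(scan_size_kb(150, bits_per_pixel=8))    # grayscale: 22.5 KB
print(scan_size_kb(150, bits_per_pixel=24))   # color:     67.5 KB

# Doubling the resolution quadruples every figure:
print(scan_size_kb(300, bits_per_pixel=24))   # color:     270 KB
```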

You may think more is better, but the file size is indicative of the amount of information the picture contains and sometimes you neither need nor want that much detail.

If the end use of your image is a screen display, such as a Web page, you can get away with a very low resolution. Traditionally, monitor images are saved at 72 dpi, but that convention dates from before 20" screens running at 1280x1024; you may have to experiment to get the image size you desire. You can also get acceptable results with 16- or even 8-bit color (also known as 64K or 256 color mode) rather than the scanner's 24-bit color (8 bits per primary, or 16.7 million colors). Saving the file as a .GIF usually gives a decent image in a minimum of space.

If your goal is a picture on paper, the first consideration is what type of picture you have. With a black-and-white line drawing, the dpi decision is simple: scan at the highest resolution you can afford, up to the rating of your printer. If your picture contains gray tones or color, you need to know some more about lines per inch.

This is a term from printing that refers to how the tones of a photograph are reproduced with black ink. Look at a newspaper photo with a magnifier and you will see that it is made of various sized dots. Traditionally, these dots sit on a regular rectangular grid with a spacing known as the screen lpi. How much of each dot covers its grid space determines how dark the gray appears.

Newspapers generally use a coarse screen of 85 or even 65 lpi and ordinary commercial printing on office papers is 120 or 133 lpi. Color magazines on slick paper are 150 or 175 lpi while a fine art book may go as high as 300 lpi. Higher resolutions require better materials, equipment, and control resulting in higher costs and slower turnaround.

Desktop printers use the same type of screening techniques to represent shades of gray, but are restricted to a much lower lpi than a printing press. You should get satisfactory results using 80 lpi with a 600 dpi laser printer, and up to 133 lpi if your laser can reach 1200 dpi. Realistically, you may not even have a choice of lpi unless you are using a professional graphics program. If you are using a commercial printing company, you should let them output your file directly to film at 2400 dpi or more. They can then use the settings that best match your original to the paper you have chosen and the capabilities of their equipment.

You may think that if you need a 1200 dpi printer to get a 133 lpi picture, the scan also needs to be that high. But remember, you're scanning 8 or even 24 bits of information for each pixel, while your printer has only 1 bit for each dot. The rule of thumb is that you need to scan at 1.25 to 1.75 times your final lpi. Thus, while a signature (black and white line work) output to a 600 dpi printer needs to be scanned at 600 dpi, a color photo only needs a 100 to 140 dpi scan.
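The rule of thumb can be sketched as a small helper. The function name is hypothetical, and the default factor of 1.5 is an assumption, simply the midpoint of the 1.25-1.75 range given above:

```python
def scan_dpi_for_halftone(lpi, factor=1.5):
    """Suggested scan resolution for a toned (grayscale or color) original.

    lpi:    the screen lines per inch of the final output.
    factor: rule-of-thumb multiplier, 1.25 to 1.75 per the article.
    """
    if not 1.25 <= factor <= 1.75:
        raise ValueError("factor should fall in the 1.25-1.75 range")
    return round(lpi * factor)

print(scan_dpi_for_halftone(80))          # 80 lpi laser screen -> 120 dpi scan
print(scan_dpi_for_halftone(80, 1.25))    # low end of the range -> 100 dpi
print(scan_dpi_for_halftone(80, 1.75))    # high end of the range -> 140 dpi
```

The 80 lpi example reproduces the 100 to 140 dpi figure quoted for a 600 dpi laser printer.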

By now you're wondering why the salesman made such a big deal of his 1200 dpi scanner if all you need is 100 dpi to make a photo album. This entire discussion is premised on printing at the same size as the original. If you enlarge the picture, the dots of your scan become larger too.

Let's take the example of a photo that you want at 150 dpi and double the original size (200%). If you choose to enlarge it as you scan, the scanner will use 300 dpi of its capability and tell the file the image is 150 dpi but twice as big. If you enlarge the photo in an application instead, you need to scan at 300 dpi and 100%; when it's finally output, the program will make each pixel twice as large. Either technique results in comparable quality and file size.

Similarly, if you are reducing your picture, you can scan at a lower dpi than your final output needs. If you make the choice manually, there is a linear relation between the enlargement or reduction and the necessary dpi.
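That linear relation can be sketched as follows (the helper function is hypothetical):

```python
def required_scan_dpi(output_dpi, scale_percent):
    """dpi to scan at 100% so the image keeps output_dpi after resizing.

    scale_percent: final size as a percentage of the original
    (200 = doubled, 50 = halved).
    """
    return output_dpi * scale_percent / 100

print(required_scan_dpi(150, 200))   # 200% enlargement -> scan at 300 dpi
print(required_scan_dpi(150, 100))   # same size -> 150 dpi
print(required_scan_dpi(150, 50))    # 50% reduction -> 75 dpi is enough
```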

What is dpi and how much do you really need? For once, the answer may not be "the best technology available." A fax machine is 200 dpi, and most OCR programs are optimized to work with a fax image. If you're working with snapshots for a home printer or Web use, 300 dpi color has you covered. If you need a signature or seal for stationery, or a line drawing for an illustration, 600 dpi is acceptable for all but the finest lines. While your scanner may have a higher capability, you should use only what you need, as a higher dpi rapidly increases the file size and slows down every processing step. As in most endeavors, proper use of available resources gives a better return than buying more expensive technology.



Wellington Macintosh Society Inc. 2002