Clinical Guidelines

Mar 16 2007

Re-Defining Ocular Hypertension

posted by: Bud O'Leary, OD

I have grappled with the definition of ocular hypertension for years. Most of us simply use our infamous number, 21 mm Hg, as our threshold for diagnosis. However, the mathematical model that assigned a cutoff roughly two standard deviations above the population mean (IOP > 21 mm Hg) as ocular hypertension may be too simplistic.
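To make the statistical basis concrete, here is a minimal sketch. The population figures (mean 15.5 mm Hg, standard deviation 2.75 mm Hg) are illustrative assumptions, not values taken from the article; the point is only that a cutoff set two standard deviations above such a mean lands near 21 mm Hg.

    # Minimal sketch of the statistical cutoff idea; NOT clinical guidance.
    # The population figures below are assumptions for illustration only.
    POP_MEAN_IOP = 15.5  # assumed population mean IOP, mm Hg
    POP_SD_IOP = 2.75    # assumed population standard deviation, mm Hg

    def statistical_threshold(mean: float, sd: float, k: float = 2.0) -> float:
        """Cutoff placed k standard deviations above the population mean."""
        return mean + k * sd

    print(statistical_threshold(POP_MEAN_IOP, POP_SD_IOP))  # 21.0 mm Hg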

We are all aware that operator- and patient-induced variables can create fluctuations in IOP measurements. Circadian and diurnal cycles also produce a range of values that may influence our diagnosis. Additionally, equipment calibration can affect the accuracy of IOP measurements.
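As a small illustration of why a single reading can mislead, the values below are invented for this example (not patient data); they show one eye whose pressures straddle the 21 mm Hg cutoff over the course of a day.

    # Illustrative only: hypothetical diurnal IOP readings (mm Hg) for one eye.
    diurnal_readings = [18.0, 19.5, 22.5, 21.0, 20.0]

    mean_iop = sum(diurnal_readings) / len(diurnal_readings)
    print(f"range {min(diurnal_readings)}-{max(diurnal_readings)} mm Hg, "
          f"mean {mean_iop:.1f} mm Hg")
    # A single reading from this eye could fall on either side of a 21 mm Hg
    # cutoff, depending on the time of day it was taken.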

A recent article in the Journal of Glaucoma, Volume 15(6), December 2006, “Inconsistency of the Published Definition of Ocular Hypertension,” supports the need for a better definition of ocular hypertension.

Ivan Tavares, M.D., Felipe Medeiros, M.D., and Robert Weinreb, M.D. conducted a literature review of 133 studies published between January 1995 and July 2005 to identify the criteria used to define ocular hypertension. The goal was to determine the influence of the Ocular Hypertension Treatment Study (OHTS) results on the definition of ocular hypertension used in the literature.

In addition to IOP, their review included central corneal thickness (CCT), visual field analysis, and optic disc assessment as criteria contributing to the definition of ocular hypertension. The study concluded that there are no uniform criteria for the definition of ocular hypertension. As the authors state, “the wide variation in criteria used to define OH suggests the important need for a standardized definition.”

Defining ocular hypertension should be a dynamic, multivariate process. Relying on a single, fixed IOP threshold alone may overlook components that could influence the diagnosis, delay treatment, or adversely affect prognosis.
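As a purely illustrative sketch of what a multivariate work-up might track, the function below tabulates each factor alongside the pressure reading instead of collapsing the decision to a single number. The field names, cutoffs, and function are hypothetical, chosen for illustration; they are not drawn from the article or from OHTS.

    from dataclasses import dataclass

    @dataclass
    class ExamFindings:
        """Hypothetical exam summary; field names and cutoffs are illustrative."""
        iop_mm_hg: float           # measured intraocular pressure
        cct_microns: float         # central corneal thickness
        visual_field_normal: bool  # reliable, full visual field
        optic_disc_normal: bool    # no glaucomatous disc changes

    def ocular_hypertension_workup(exam: ExamFindings,
                                   iop_cutoff: float = 21.0,
                                   thin_cct_cutoff: float = 555.0) -> dict:
        """Flag each factor rather than applying one fixed IOP number.
        Both cutoffs are placeholders, not recommendations."""
        return {
            "elevated_iop": exam.iop_mm_hg > iop_cutoff,
            "thin_cornea": exam.cct_microns < thin_cct_cutoff,  # thin corneas may understate true IOP
            "normal_field_and_disc": exam.visual_field_normal and exam.optic_disc_normal,
        }

    # Example: a borderline pressure with a thin cornea prompts a different
    # conversation than the same pressure with a thick cornea.
    print(ocular_hypertension_workup(ExamFindings(22.0, 520.0, True, True)))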