IMAGE PROCESSING: AVI MEDICAL IMAGE CONTROL
ABSTRACT
The medical images acquired directly from various instruments are in the AVI format, which limits control over image display unless they are converted to the medical imaging standard, the DICOM format. The purpose of this project is to develop software that handles online data acquisition from medical equipment such as an ultrasound machine, controls the display rate, converts the AVI image acquired from the equipment directly to a DICOM image with patient details obtained from the user, freezes the AVI frame of interest, converts the frozen frame to a bitmap image, and converts that bitmap image to a DICOM image with patient details. The software is highly reliable, handles memory efficiently and is very user friendly.
Medical equipment such as ultrasound and CT machines produces output images in the AVI file format, acquired with the respective probes. These AVI images are stored. The software captures the AVI image, displays its frames in succession and converts them to a DICOM image with the required patient details obtained from the specialist during conversion. The frame of interest can be frozen and converted to a bitmap image, which can also be viewed in a separate window with options to brighten, darken, change the color combination, invert the image and restore the image. The converted DICOM image can be viewed in any standard DICOM viewer, and most DICOM viewers provide a way to view the patient details entered during conversion.
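As a rough illustration of the frame manipulation described above (brighten, darken, invert, restore), the sketch below operates on a frozen frame held as an 8-bit NumPy array; the function names are purely illustrative and are not part of the actual software.

import numpy as np

def brighten(frame, amount=30):
    """Add a constant to every pixel, clipping to the valid 8-bit range."""
    return np.clip(frame.astype(np.int16) + amount, 0, 255).astype(np.uint8)

def darken(frame, amount=30):
    """Subtract a constant from every pixel, clipping at zero."""
    return np.clip(frame.astype(np.int16) - amount, 0, 255).astype(np.uint8)

def invert(frame):
    """Photographic negative of an 8-bit frame."""
    return 255 - frame

# 'Restore' simply means keeping an untouched copy of the frozen frame
# (original = frame.copy()) and re-displaying it when requested.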
OBJECTIVE
To help the doctor view a particular frame of interest captured from medical equipment, which is usually an AVI image, and to enable the doctor to manipulate the frame so that a correct diagnosis can be made and efficient treatment provided.
MEDICAL IMAGING
From ophthalmology and radiology to orthodontics, image processing touches the medical field in many ways. The ability to visualize and interactively manipulate three-dimensional objects derived from sets of two-dimensional MRI and CAT scan (now shortened to CT scan) slices has changed the way we deal with medicine. MRI stands for magnetic resonance imaging, originally called nuclear magnetic resonance imaging.
DISADVANTAGE OF EXISTING SYSTEM
There is no AVI viewer that allows doctors to manipulate the medical image captured from the equipment. The AVI viewers available simply display the frames at predetermined time intervals, and the display time of each frame cannot be controlled to suit the physician's requirement. The frame at a particular instant can be displayed, but this does not help the doctor capture the exact frame needed to find the exact defect.
PROPOSED SYSTEM
This system will prove to be user friendly: it captures the medical AVI image, grabs the required header information, converts it to the DICOM file format and stores it along with the patient's details, physician's details and so on, so that any physician can diagnose the patient without needing further information. Moreover, there are many DICOM viewers available with extensive image processing facilities.
Steps to Control the Image
Ø Capture the image from the medical equipment; it will normally be in the AVI (Audio/Video Interleaved) format.
Ø Analyze the header details of the AVI image.
Ø Copy the required header details into the DICOM header format.
Ø If the length of the header is greater than zero, it is considered valid.
Ø Find the start of a frame in the AVI file and check its length; if the data is valid, copy the frame into the DICOM file, otherwise skip the frame.
Ø View the DICOM file in an appropriate DICOM viewer (a minimal code sketch of this conversion flow follows the list).
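The sketch below outlines this flow under stated assumptions: it grabs the frame of interest from the AVI with OpenCV rather than parsing the RIFF header by hand, and builds a minimal Secondary Capture DICOM dataset with pydicom 2.x carrying the patient details. Every function name, tag value and UID choice here is illustrative and not the project's actual implementation.

import cv2
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

SC_STORAGE = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture Image Storage SOP class

def avi_frame_to_dicom(avi_path, frame_index, out_path, patient_name, patient_id):
    # Grab the frame of interest from the AVI file.
    cap = cv2.VideoCapture(avi_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError("frame could not be read; invalid data is skipped")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # 8-bit grayscale pixel data

    # Minimal DICOM file meta information.
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = SC_STORAGE
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    # Dataset carrying the patient details and the copied frame.
    ds = FileDataset(out_path, {}, file_meta=meta, preamble=b"\x00" * 128)
    ds.SOPClassUID = SC_STORAGE
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.PatientName = patient_name
    ds.PatientID = patient_id
    ds.Modality = "OT"
    ds.Rows, ds.Columns = gray.shape
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.BitsAllocated = 8
    ds.BitsStored = 8
    ds.HighBit = 7
    ds.PixelRepresentation = 0
    ds.PixelData = gray.tobytes()

    ds.is_little_endian = True
    ds.is_implicit_VR = False
    ds.save_as(out_path, write_like_original=False)

# Example (hypothetical file names and patient details):
# avi_frame_to_dicom("scan.avi", 42, "scan.dcm", "DOE^JOHN", "12345")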
IMAGE FILTERING
Image filtering is used to extract large amounts of information from our images, information to which we would not normally have access.
Ø Edge enhancement and sharpening filters bring out details in objects that we would not otherwise have noticed.
Ø Averaging filters smooth the rough and jagged edges in our images, making them more appealing to the eye.
Ø Basic statistical filters remove much of the noise found in our CCD-scanned images.
Ø Gradient analysis helps us visualize an image in a whole new light, greatly enhancing edges and allowing us to create interesting embossed image effects.
Ø Special filters can help us identify certain objects within an image.
Ø A low-pass filter passes the lower-frequency components of an image while attenuating or rejecting the higher-frequency components.
Ø A high-pass filter amplifies the high-frequency detail found in an image while the low-frequency content of the image remains intact (a small convolution-based sketch follows this list).
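As a rough sketch of the low-pass, high-pass and statistical filters mentioned above, assuming SciPy and NumPy are available and the image is an 8-bit grayscale array; the kernels shown are common textbook choices, not the specific ones used by this software.

import numpy as np
from scipy.ndimage import convolve, median_filter

# 3x3 averaging kernel: a simple low-pass filter that smooths noise and jagged edges.
low_pass = np.full((3, 3), 1.0 / 9.0)

# 3x3 high-pass kernel: amplifies fine detail and edges.
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

def apply_filter(image, kernel):
    """Convolve an 8-bit grayscale image with a kernel and clip back to 0-255."""
    result = convolve(image.astype(float), kernel, mode="reflect")
    return np.clip(result, 0, 255).astype(np.uint8)

# A median filter is a basic statistical filter for removing impulse noise:
# denoised = median_filter(image, size=3)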
IMAGE PROCESSING:
Images are a vital and integral part of everyday life. On an individual, or person-to-person, basis, images are used to reason, interpret, illustrate, represent, memorize, educate, communicate, evaluate, navigate, survey, entertain, and so on. We do this continuously and almost entirely without conscious effort. As man builds machines to facilitate his ever more complex lifestyle, the only reason for NOT providing them with the ability to exploit or transparently convey such images is a weakness of available technology.
Applied Image Processing, in its
broadest and most literal interpretation, aims to address the goal of providing
practical, reliable and affordable means to allow machines to cope with images
while assisting man in his general endeavors.
By
contrast, the term ‘image processing’ itself has become firmly associated with
the much more limited objective of
modifying images such that they are either:
a.
Corrected for errors introduced during acquisition or
transmission (‘restoration’); or
b. Enhanced to overcome the weaknesses of the human visual system ('enhancement').
As
such, the discipline of ‘pure’ image processing may be succinctly summarized as
being concerned with
‘a process which takes an image input and generates a modified image output’.
Clearly
then, other disciplines must be allied to pure image processing in order to
allow the stated goal to be achieved. ‘Pattern classification’, which may be
defined simply as
‘a process which takes a feature vector input and generates a class number output’
confers the ability to identify or recognize objects and to perform sorting and some inspection tasks. ‘Artificial intelligence’, which may be defined as
‘a process which takes primitive data input and generates a description, or understanding, or a behavior as an output’
confers a wide range of capability, from description, in the form of simple measurement of parameters for inspection purposes, to a form of autonomy borne of an ability to interpret the world through a visual sense.
These disciplines have been evolving steadily and independently ever since computers first became available, but only when they are all effectively harnessed together do machines acquire anything like the ability to exploit images in the way that humans do.
In
particular, the marriage of one, or both, of the first two disciplines with
artificial intelligence has given birth to the new, image specific disciplines,
namely ‘image analysis’, ‘scene analysis’ and ‘image understanding’.
Image analysis is normally satisfied with quantifying data about objects which are known to exist within a scene, determining their orientation, or recognizing them as one of a limited set of possible prototypes. As such, it is largely concerned with the development of 2-D applications, yet there is an undoubted need to extend this activity to the description of 3-D relationships between objects within a 2-D view of a real-world scene.
Scene analysis was the original term
coined to describe this extension of image analysis into the third dimension.
Such work flourished in the 1960s and was concerned with the rigorous visual
analysis of three-dimensional polyhedra (the so-called ‘blocks-world’), on the
mistaken premise that it would be a trivial matter to extend these concepts to
the analysis of natural scenes. The work was finally abandoned in the late
1970s when it was realized that the exploitation of application-dependent
constraints was no way to research general-purpose vision systems.
Consequently, the term scene analysis fell into disuse, only to be replaced by that of image understanding, which is more fundamentally based upon the physics of image formation and the operation of the human visual system. It aims to allow machines to operate with ease in complex natural environments which feature partially occluded objects or, ultimately, previously unseen objects.
A
broad overview of the literature in the field of machine perception of images
suggests the existence of two distinct ‘camps’ whose followers, while sharing
common roots, set out to achieve fundamentally different objectives. We have
chosen to label these camps as ‘computer vision’ and ‘machine vision’, and feel
that they are essentially distinguished by their different approaches to the
use of artificial intelligence and the degree to which it is employed. (‘Robot
vision’ was also a popular alternative at one time, although it appears to be
slowly falling into disuse, perhaps because of rather unfortunate
science-fiction connotations.)
‘Computer vision’ is ultimately
concerned with the goal of enabling machines to understand the world that they
see, in real-time and without any form of human assistance. Thus,
application-specific constraints are rejected wherever possible as the world is
‘interpreted on-line’. The complexity of this task is easily under-estimated by
those who take human vision for granted, but it is fraught with many immensely
difficult problems, and seriously hampered by inadequate processing power.
‘Machine vision’, on the other hand, is concerned with utilizing existing technology in the most effective way to endow a degree of autonomy in specific applications. The universal nature of the computer vision approach is sacrificed by deliberately exploiting application-specific constraints. Thus knowledge about the world is ‘pre-compiled’, or engineered, into machine vision applications in order to provide cost-effective solutions to real-world problems.
DIGITAL IMAGE ACQUISITION:
The general goal of image acquisition and processing is to bring pictures into the domain of the computer, where they can be displayed and then manipulated and altered for enhancement. Four processes are involved in image acquisition:
Ø Input
Ø Display
Ø Manipulation
Ø Output
‘The transformation of optical image data into an array of numerical data which may be manipulated by a computer, so that the overall aim of machine vision may be achieved.’
In order to achieve this aim, three major issues must be tackled:
Ø Representation
Ø Transduction (or sensing)
Ø Digitizing
ARITHMETIC OPERATIONS ON IMAGES:
The arithmetic operations are absolutely essential for calibration and flattening of the image in certain applications, particularly those with a low signal. They are helpful tools for enhancing an image (a short example follows the list below). The basic arithmetic operations on images are:
Ø Addition
Ø Subtraction
Ø Multiplication
Ø Division
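A minimal sketch of these pixel-wise operations on 8-bit grayscale images, assuming NumPy; the clipping and scaling choices are illustrative rather than prescribed by the text.

import numpy as np

def add_images(a, b):
    """Pixel-wise addition with saturation at 255 (e.g., for frame averaging)."""
    return np.clip(a.astype(np.int16) + b.astype(np.int16), 0, 255).astype(np.uint8)

def subtract_images(a, b):
    """Pixel-wise subtraction clipped at 0 (e.g., background or dark-frame removal)."""
    return np.clip(a.astype(np.int16) - b.astype(np.int16), 0, 255).astype(np.uint8)

def multiply_images(a, mask):
    """Pixel-wise multiplication by a mask scaled to [0, 1]."""
    return np.clip(a.astype(float) * (mask.astype(float) / 255.0), 0, 255).astype(np.uint8)

def divide_images(a, flat):
    """Pixel-wise division, e.g., flat-field correction; guards against division by zero."""
    flat = np.maximum(flat.astype(float), 1.0)
    return np.clip(a.astype(float) / flat * flat.mean(), 0, 255).astype(np.uint8)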
GEOMETRIC TRANSFORMATIONS:
Many times, to combine images taken at different times or by different sources, we have to translate, rescale and rotate the images; it is usually important that the images match spatially. Without proper registration of the images before they are combined, most techniques for image enhancement will actually degrade the images, losing important or interesting information. The basic geometric transformations are listed below, followed by a short example:
Ø Translation
Ø Scaling/Zooming
Ø Resampling
Ø Rotation
Ø Flipping
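A rough sketch of these transformations, assuming SciPy's ndimage module and NumPy; the shift, angle and zoom values are arbitrary examples.

import numpy as np
from scipy import ndimage

def transform_example(image):
    """Chain the basic geometric transformations on a grayscale image."""
    translated = ndimage.shift(image, shift=(10, -5))           # translation (rows, cols)
    zoomed = ndimage.zoom(translated, zoom=1.5)                  # scaling/zooming (resamples the grid)
    rotated = ndimage.rotate(zoomed, angle=15, reshape=False)    # rotation about the image center
    flipped = np.flipud(rotated)                                 # vertical flip
    return flipped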
ADVANCED GEOMETRIC TRANSFORMATIONS:
Have you ever wondered how the interesting special effects you see in movies and commercials are made? How, in an image, one person can be transformed into another person, or even into an animal or some other entity? The two advanced geometric transformations are:
Ø Warping
Ø Morphing
Warping is a digital technique of distorting an image, hence it is also called geometric distortion. It has been used to create sophisticated special effects in movies and television shows and, in recent times, in a plethora of television commercials, all of which use exotic computers and custom software.
Morphing is an extension of warping: the complete and smooth transformation from one image to another. This technology, which has traditionally been prohibitively expensive, can now, with a little effort, be done very cheaply on a desktop computer. Essentially, morphing involves two steps of warping, with a spline interpolation between the initial images and the resultant image. Morphing requires matching key features such as the eyes, nose, mouth and other details of both images in the same graphic space. Finally, a weighted average is taken at each step of the transformation of the two warps. For instance, to morph a truck into a train, the train is first warped into the same shape as the truck so that certain specific points, such as the windshields, headlights and grilles, match as closely as possible.
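The weighted-average step can be sketched as below, assuming NumPy and that the two images have already been warped so that their key features coincide; the feature-based warping itself is omitted, and warp() in the comment is only a placeholder for it.

import numpy as np

def morph_step(warped_src, warped_dst, t):
    """Weighted average of two pre-warped 8-bit images at morph fraction t in [0, 1]."""
    blend = (1.0 - t) * warped_src.astype(float) + t * warped_dst.astype(float)
    return np.clip(blend, 0, 255).astype(np.uint8)

# A full morph sequence would re-warp both images toward intermediate feature
# positions for each t and then blend, e.g.:
# frames = [morph_step(warp(src, t), warp(dst, t), t) for t in np.linspace(0, 1, 20)]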
IMAGE PREPROCESSING:
Image preprocessing seeks to modify and prepare the pixel values of a digitized image to produce a form that is more suitable for subsequent operations within the generic model. There are two major branches of image preprocessing, namely:
Ø Image Enhancement
Ø Image Restoration
Image enhancement attempts to improve the quality of an image or to emphasize particular aspects within the image. Such an objective usually implies a degree of subjective judgment about the resulting quality and will depend on the operation and the application in question. The result may be an image which is quite different from the original, and some aspects may have to be deliberately sacrificed in order to improve others.
The aim of image restoration is to recover the original image after it has been degraded by 'known' effects such as geometric distortion within a camera system or blur caused by poor optics or movement. In all cases a mathematical or statistical model of the degradation is required so that restorative action can be taken.
Both types of operation take the acquired image array as input and produce a modified image array as output; they are thus representative of pure 'image processing'. Many of the common image processing operations are essentially concerned with the application of linear filtering to the original image 'signal'.