
Artificial Intelligence


What is Artificial Intelligence?

There is a lot of buzz around artificial intelligence at the moment, and the term AI seems to be thrown around a lot. But what is it, exactly? To clear things up, we have to start with a definition.

To avoid confusion, we have to go back to the earliest and hence purest definition of AI, from the time the term was first coined. The field was formally defined by John McCarthy in the 1955 proposal for the Dartmouth conference.

Of course, plenty of research on AI had been done by others, such as Alan Turing, before this, but they were working in what was an undefined field before 1955.

McCarthy proposed that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

An attempt will be made to find how to make machines use language, form abstractions and concepts, solve the kinds of problems now reserved for humans, and improve themselves.

In short, AI is a machine with the ability to solve problems that are usually solved by us with our natural intelligence. A computer demonstrates a form of intelligence when it learns how to improve itself at solving these problems.

To elaborate further, the 1955 proposal defined seven areas of AI. Today there are surely more, but the original seven are described below.


Areas of AI:

  1. Simulating higher functions of the human brain.
  2. Programming a computer to use general language.
  3. Arranging hypothetical neurons in a manner that enables them to form concepts.
  4. A way to determine and measure problem complexity.
  5. Self-improvement.
  6. Abstraction: defined as the quality of dealing with ideas rather than events.
  7. Randomness and creativity.

After 60 years, it is thought that we have realistically made at least some progress on language, on measuring problem complexity, and on self-improvement. However, randomness and creativity are only just starting to be explored.

Recently, we have seen a couple of web episodes, scripts, short films, and even a feature-length film co-written or completely written by AI. In the definition above, you heard the word intelligence. So what is intelligence? Jack Copeland, who has written several books on AI, breaks it down into a number of factors.

Factors of Artificial Intelligence:

Some of the most important factors of intelligence are:

  1. Generalization: learning that enables the learner to perform better in situations not previously encountered.
  2. Reasoning: to reason is to draw conclusions appropriate to the situation at hand.
  3. Problem solving: given such-and-such data, find x.
  4. Perception: scanning an environment and analyzing features and relationships between objects, as in self-driving cars.
  5. Language understanding: understanding language by following syntax and other rules, similar to a human.

Strong AI vs weak AI:

Okay, so now we have an understanding of AI and intelligence. To bring it together and solidify the concept of what AI is, here are a few examples: machine learning, computer vision, natural language processing, robotics, pattern recognition, and knowledge management. There are also different types of artificial intelligence in terms of approach, for example strong AI and weak AI.

Strong AI means simulating the human brain by building systems that think and, in the process, give us an insight into how the brain works. We are nowhere near that stage yet. Weak AI is a system that behaves like a human but doesn't give us insight into how the brain works.

IBM's Deep Blue, a chess-playing AI, was an example: it processed millions of possible moves before it made any actual move on the chessboard. It doesn't stop there, though; there is actually a middle ground between strong and weak AI.

This is where a system is inspired by human reasoning but doesn't have to stick to it. IBM's Watson is an example: like a human, it reads a lot of information, recognizes patterns, and builds up evidence to say, "I am X% confident that this is the right solution to the question you have asked, based on the information I have read."
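The evidence-accumulation idea described above can be sketched as a toy Bayesian odds update. The prior and the likelihood numbers below are invented purely for illustration; they have nothing to do with Watson's actual internals.

```python
# A toy sketch of evidence-based confidence: each independent piece
# of supporting evidence multiplies the odds that a candidate answer
# is correct. All numbers here are illustrative.
def confidence(prior, likelihood_ratios):
    """Combine a prior probability with evidence via odds updates."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr                    # each piece of evidence scales the odds
    return odds / (1 + odds)          # back from odds to probability

# Three pieces of evidence, each making the answer twice as likely:
print(f"{confidence(0.25, [2, 2, 2]) * 100:.0f}% confident")  # → 73% confident
```

The same update rule underlies many evidence-combining systems; only the source of the likelihoods differs.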

Google's deep learning approach is similar, in that it mimics the structure of the human brain by using neural networks, but it doesn't follow the brain's function exactly. The system uses nodes that act as artificial neurons connecting information. Going a little deeper, neural networks are actually a subset of machine learning.
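The idea of nodes acting as artificial neurons can be sketched in a few lines. The weights and inputs below are arbitrary illustrative values, not taken from any real network.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs passed
    through a sigmoid activation, loosely mimicking a firing rate."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def tiny_network(x):
    # Two hidden neurons feed one output neuron (illustrative weights).
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.05)

print(round(tiny_network([1.0, 0.5]), 3))  # → 0.592
```

A real deep network stacks many such layers and, crucially, learns the weights from data instead of having them written by hand.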

So what is machine learning, then? Machine learning refers to algorithms that enable software to improve its performance over time as it obtains more data. This is programming by input-output examples rather than just coding.

Let us use an example. A programmer would have no idea how to program a computer to recognize a dog, but he can create a program with a form of intelligence that can learn to do so, by giving the program enough images of dogs and letting it process them and learn.

When you then give the program an image of a new dog that it has never seen before, it will be able to tell that it's a dog with relative ease.
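The learning-from-examples idea can be shown with a toy classifier. The two-number "features" below (say, ear length and snout length) are invented for illustration; a real system would learn from raw pixels.

```python
# Labeled examples stand in for the training images:
training_data = [
    ([8.0, 9.0], "dog"), ([7.5, 8.5], "dog"), ([8.5, 9.5], "dog"),
    ([3.0, 2.5], "cat"), ([2.5, 3.0], "cat"), ([3.5, 2.0], "cat"),
]

def classify(features):
    """1-nearest-neighbour: label a new example with the label of the
    closest training example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda ex: dist(ex[0], features))
    return label

# An animal the program has never seen before:
print(classify([7.0, 8.0]))  # → dog
```

No rule for "dog" was ever written; the behavior comes entirely from the examples, which is the essence of programming by input-output pairs.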

Many artificial intelligence systems are expert systems. So what are expert systems? An often-cited definition of an expert system is as follows:

“An expert system is a system that employs human knowledge in a computer to solve problems that ordinarily require human expertise.”
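An expert system in this sense can be sketched as a knowledge base of if-then rules plus a simple inference loop. The rules below are made up for illustration and are not from any real medical system.

```python
# Human knowledge captured as (conditions, conclusion) rules:
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_breath"}, "see_doctor"),
]

def infer(facts):
    """Forward chaining: keep firing rules whose conditions are all
    satisfied until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"fever", "cough", "short_breath"})))
# → ['cough', 'fever', 'flu_suspected', 'see_doctor', 'short_breath']
```

Note how the second rule fires only because the first one added a fact; chaining handcrafted rules like this is exactly what distinguishes classic expert systems from learned models.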

Basically, it's a practical application of a knowledge database. Demis Hassabis, a co-founder of DeepMind, highlighted in a Google blog that the company had mastered Go and thus achieved one of the grand challenges of AI.

However, the most significant aspect of this for us is that AlphaGo is not an expert system built on handcrafted rules; instead, it uses general machine learning techniques to figure out the game for itself.

Our hope is that one day these techniques could be extended to help us address some of society's toughest and most pressing problems, from climate modeling to complex disease analysis.

In other words, the underlying algorithms can serve as a basis to be applied to very complex problems.

Applications of artificial intelligence:

A lot of us are paranoid about how artificial intelligence might negatively impact our lives. However, the present picture is thankfully more positive. So let's explore how artificial intelligence is helping our planet and ultimately benefiting humankind.

  1. Artificial intelligence in artificial creativity:

Have you ever wondered what would happen if an artificially intelligent machine tried to create music and art? MuseNet is a system that creates music using artificial intelligence.

MuseNet is a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles, including that of the Beatles.

MuseNet was not explicitly programmed with an understanding of music; instead, it discovers patterns of harmony, rhythm, and style by learning on its own.

Another creative product of artificial intelligence is a content automation tool called Wordsmith. Wordsmith is a natural language generation platform that can transform your data into an insightful narrative. Tech giants such as Yahoo and Microsoft have reportedly used Wordsmith to generate on the order of 1.5 billion pieces of content a year.
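The core idea of a natural language generation platform, structured data in, a short narrative out, can be sketched with a simple template. This is a toy illustration in the spirit of tools like Wordsmith, not its actual API, and the figures below are made up.

```python
# A toy data-to-narrative generator: numbers go in, a sentence comes out.
def generate_report(company, quarter, revenue, prev_revenue):
    change = (revenue - prev_revenue) / prev_revenue * 100
    trend = "rose" if change >= 0 else "fell"
    return (f"{company}'s revenue {trend} {abs(change):.1f}% "
            f"in {quarter}, reaching ${revenue:,.0f}.")

print(generate_report("Acme Corp", "Q3", 1_250_000, 1_000_000))
# → Acme Corp's revenue rose 25.0% in Q3, reaching $1,250,000.
```

Production systems add grammar handling, synonym variation, and many more templates, but the pipeline is the same: compute facts from the data, then verbalize them.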

  2. AI in social media:

Ever since social media became part of our identity, we have been generating an immeasurable amount of data through chats, tweets, posts, and so on. Wherever there is an abundance of data, AI and machine learning are involved.

On social media platforms like Facebook, AI is used for face verification, while machine learning and deep learning are used to detect facial features and tag your friends. Deep learning extracts every minute detail from an image using a stack of deep neural networks.

Machine learning algorithms are also used to design your news feed around your interests. Another example is Twitter's AI, which has been used to identify hate speech and terroristic language in tweets. It makes use of machine learning, deep learning, and natural language processing to filter out offensive content.
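Real platforms use deep models for this filtering, but the core mechanism, scoring text against learned signals and flagging it past a threshold, can be shown with a toy keyword scorer. The blocklist words and weights below are invented.

```python
# Illustrative word weights standing in for a learned model:
BLOCKLIST = {"hate": 2, "attack": 1, "threat": 2}

def flag(text, threshold=2):
    """Flag text whose cumulative score reaches the threshold."""
    words = text.lower().split()
    score = sum(BLOCKLIST.get(w, 0) for w in words)
    return score >= threshold

print(flag("we will attack and threat them"))  # → True
print(flag("have a nice day"))                 # → False
```

Swapping the hand-written weights for ones learned from labeled tweets is essentially the step from this sketch to a statistical content filter.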

According to a recent survey, the company discovered almost 300,000 terrorist accounts, 95% of which were found by non-human, artificially intelligent machines.

  3. AI in chatbots:

These days, virtual assistants have become a very important technology, and almost every household has one controlling the home. Siri, for example, is gaining popularity because of the user experience it provides. Amazon Alexa is an example of how AI can be used to translate human language into desirable actions.

This device uses speech recognition and natural language processing to perform a wide range of tasks on your command.

It can do more than just play your favorite songs: it can control the devices in your home, book cabs for you, make phone calls, order your favorite food, check the weather, and so on.
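After speech recognition turns audio into text, an assistant must map the utterance to an action. A toy version of that step is keyword-based intent matching; the intents and keywords below are invented, and real assistants use statistical NLP instead.

```python
# Hypothetical intents, each described by a small keyword set:
INTENTS = {
    "weather": {"weather", "forecast", "rain"},
    "music":   {"play", "song", "music"},
    "lights":  {"light", "lights", "lamp"},
}

def detect_intent(utterance):
    """Pick the intent whose keywords overlap the utterance most."""
    words = set(utterance.lower().split())
    best, overlap = "unknown", 0
    for intent, keywords in INTENTS.items():
        n = len(words & keywords)
        if n > overlap:
            best, overlap = intent, n
    return best

print(detect_intent("play my favorite song"))  # → music
print(detect_intent("will it rain tomorrow"))  # → weather
```

Once the intent is known, the assistant dispatches to the matching skill, such as a music player or a weather service.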

Another example is Google's virtual assistant Google Duplex, which has astonished millions of people. Not only does it respond to calls and book appointments for you, it adds a human touch. It uses natural language processing and machine learning to process human language and perform tasks such as managing your schedule, controlling your smart home, making reservations, and so on.

  4. AI in autonomous vehicles:

For a long period of time, self-driving cars have been a great area of interest in the AI industry. The development of autonomous vehicles will definitely revolutionize the transportation system. Companies like Waymo conducted several test drives before deploying their public ride service. The artificial intelligence system collects data from the vehicle's radar, cameras, GPS, and cloud services to produce control signals that operate the vehicle.

Advanced deep learning algorithms can accurately predict what objects in the vehicle's vicinity are likely to do. This makes Waymo cars much more effective and safer.

Another important example of autonomous vehicles is Tesla's self-driving cars. Tesla's AI combines computer vision, image detection, and deep learning to build cars that can automatically detect objects and drive around without human intervention. Elon Musk, co-founder of Tesla, talks a great deal about how AI is implemented in Tesla's self-driving cars and Autopilot features.

He has claimed that Tesla will have fully self-driving cars ready by the end of the year, along with a robotaxi version that can carry passengers without anyone behind the wheel.

Tesla's Autopilot software goes beyond driving the car where you tell it to go: if you are not in the mood for talking, Autopilot will check your calendar and drive you to your scheduled appointment. This sounds pretty amazing!

  5. AI in space exploration:

AI also has applications in space exploration, one of the most interesting fields in which it is being implemented. Space exploration and discovery always require analyzing large amounts of data, and AI and machine learning are well suited to handling and processing data at that scale.

After years of research, astronomers now use AI to go through years of data obtained by the Hubble telescope in order to identify distant planets and solar systems.

AI is also part of NASA's Mars 2020 rover mission. AEGIS, an AI targeting system already aboard NASA's current Mars rovers, autonomously targets the cameras in order to carry out investigations on Mars. This shows just how far AI has come.


Recent AI inventions:

The current decade has been immensely important for AI inventions. In recent years, AI has become embedded in our day-to-day existence: we use smartphones with voice assistants and computers with intelligent functions. AI is no longer a pipe dream.

In 2010, ImageNet launched the ImageNet Large Scale Visual Recognition Challenge, and Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement using a 3D camera and infrared detection.

In 2011, Apple released Siri, a virtual assistant on the iOS operating system. Siri relies on a natural-language user interface to observe, answer, and recommend things to its human user, adapting to voice commands and offering an individualized experience.

Then in 2012, two Google researchers trained a large neural network of 16,000 processors to recognize cats by showing it 10 million unlabeled images from YouTube videos.

In 2014, Microsoft released Cortana, its version of a virtual assistant similar to Siri on iOS. Amazon also created Amazon Alexa, a home assistant that developed into smart speakers functioning as personal assistants.

Between 2015 and 2017, Google DeepMind's AlphaGo, a computer program that plays the board game Go, defeated various human champions.

Not just that: in 2016, a humanoid robot named Sophia was created by Hanson Robotics. She is the first robot citizen, with the ability to see, make facial expressions, and communicate. Through AI, Sophia has more human-like features than other humanoids.

Finally, in 2018, the first bidirectional, unsupervised language representation (BERT) was developed, which can be used in a variety of natural language tasks via transfer learning. Samsung also introduced Bixby, a virtual assistant whose functions include voice interaction, where the user can speak to it and ask for answers, recommendations, and suggestions.

Limitations of AI:

AI can beat humans at board games such as checkers, chess, and Go, but it doesn't understand the concept of a game or even the purpose of playing games. Furthermore, an AI that has become a champion at chess could not be used to learn to play a musical instrument; we would need a separate and completely different architecture for that task.

Narrow AI can make recommendations for movies and purchases on the basis of what you have watched or purchased in the past, but it has no idea what it means to watch a movie or buy something.

It can't even explain or articulate why it makes the recommendations it does. Narrow AI gathers information from around the globe for use in making weather predictions, but it has no idea what weather actually is.

It has no real-world experience. In short, narrow AI works within a very limited context and can't solve problems outside of its training.

For example, an AI that is designed to detect abnormalities in X-rays can't turn around and drive you home after work. Even applications like self-driving cars often use a whole range of machine learning algorithms to complete the task.

Narrow AI is very good at processing mountains of data and finding patterns and correlations, but it lacks any real common-sense understanding of what that data means or of the predictions it makes. AI systems often use an approach to processing data and solving problems that is very different from the way human intelligence works.

For example, if an image is shown twice, once with a little noise added, a human has no problem recognizing it as the same image before and after the noise, but for narrow AI this can require a complex system.
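The robustness point above can be illustrated with a tiny sketch: an "image" is just a vector of pixel values, and a nearest-centroid classifier tolerates small noise only because the distances happen to stay small, not because it understands what it is seeing. The two classes and pixel values are invented.

```python
import random

# Two hypothetical classes of 8-pixel "images":
CENTROIDS = {"bright": [0.9] * 8, "dark": [0.1] * 8}

def classify(pixels):
    """Assign the class whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda c: dist(CENTROIDS[c], pixels))

random.seed(0)                         # deterministic noise for the demo
clean = [0.85] * 8
noisy = [p + random.uniform(-0.05, 0.05) for p in clean]
print(classify(clean), classify(noisy))  # → bright bright
```

Push the noise a little further and the distances flip, while a human would still see the same picture; that gap is exactly the missing common-sense understanding.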

Robotic Surgery


What is Robotic Surgery?


Robotic surgery is the latest evolution of minimally invasive surgical procedures. During surgery, three or four robotic arms are inserted into the patient through small incisions in the abdomen. One arm carries a camera, two act as the surgeon's hands, and a fourth arm may be used to move obstructions out of the way.

The patient is surrounded by a complete surgical team while the surgeon is seated at a nearby console. The surgeon uses a viewfinder that offers a three-dimensional image of the surgical field, and the surgeon's hands are placed in special devices that direct the instruments.

Robotic arms filter out any tremors in the physician's hands and increase the physician's range of motion. This enhanced precision is helpful to the surgeon during delicate portions of procedures.

Robotic surgery is ideal for more complex and difficult-to-access operations. Within the specialty of general surgery, its application is most developed in oncological surgery of the rectum and the esophagus and stomach, as well as in other procedures such as surgery for morbid obesity and the pelvic floor.

Robotic surgery is a new and exciting emerging technology that is taking the surgical profession by storm. Up to this point, however, the race to acquire and incorporate this emerging technology has primarily been driven by the market. 

In addition, surgical robots have become the entry fee for centers wanting to be known for excellence in minimally invasive surgery despite the current lack of practical applications. Therefore, robotic devices seem to have more of a marketing role than a practical role. 

Whether or not robotic devices will grow into a more practical role remains to be seen.

It is an advanced form of minimally invasive or laparoscopic (small-incision) surgery in which surgeons use a computer-controlled robot to assist them in certain surgical procedures.

The hands of the robot have a high degree of dexterity, allowing surgeons to operate in very tight spaces in the body that would otherwise only be accessible through open (long incision) surgery.

The idea of robotics used for surgery began more than 50 years ago, but actual use began in the late 1980s with Robodoc (Integrated Surgical Systems, Sacramento, CA), the orthopedic image-guided system developed by Hap Paul, DVM, and William Bargar, MD, for use in prosthetic hip replacement. 

During the time frame of Drs. Paul and Bargar’s development of Robodoc, Brian Davies and John Wickham were developing a urologic robot for prostate surgery. 

In addition, there were a number of computer-assisted systems being used in neurosurgery (called stereotactic) and otolaryngology. 

These were procedure-specific, computer-assisted, and image-guided systems that proved both the potential and value of robotic surgery systems. 

They also heralded the multipurpose teleoperated robotic systems initially developed by SRI International and the Defense Advanced Research Projects Agency (DARPA) and led to the surgeon-controlled (multifunctional) robotic telepresence surgery systems that have become a standard of care. 

The impetus to develop these systems stemmed from the Department of Defense’s need to decrease battlefield casualties, and DARPA was precisely the agency to conduct such high-risk research and development.


Background and history of surgical robots:

Since 1921, when Czech playwright Karel Capek introduced the notion and coined the term robot in his play Rossum's Universal Robots, robots have taken on increasingly important roles, both in imagination and in reality.

Robot, taken from the Czech robota, meaning forced labor, has evolved in meaning from dumb machines that perform menial, repetitive tasks to the highly intelligent anthropomorphic robots of popular culture. 

Although today’s robots are still unintelligent machines, great strides have been made in expanding their utility. 

Today robots are used to perform highly specific, highly precise, and dangerous tasks in industry and research that were previously not possible with a human workforce. Robots are routinely used to manufacture microprocessors used in computers, explore the deep sea, and work in hazardous environments, to name a few examples. Robotics, however, has been slow to enter the field of medicine.

The lack of crossover between industrial robotics and medicine, particularly surgery, is at an end. Surgical robots have entered the field in force. Robotic telesurgical machines have already been used to perform a transcontinental cholecystectomy.

Voice-activated robotic arms routinely maneuver endoscopic cameras, and complex master-slave robotic systems are currently FDA approved, marketed, and used for a variety of procedures.

It remains to be seen, however, if history will look on the development of robotic surgery as a profound paradigm shift or as a bump in the road on the way to something even more important.

Paradigm shift or not, the origin of surgical robotics is rooted in the strengths and weaknesses of its predecessors. Minimally invasive surgery began in 1987 with the first laparoscopic cholecystectomy. Since then, the list of procedures performed laparoscopically has grown at a pace consistent with improvements in technology and the technical skill of surgeons. 

The advantages of minimally invasive surgery have made it very popular among surgeons, patients, and insurance companies. Incisions are smaller, the risk of infection is lower, hospital stays are shorter, if required at all, and convalescence is significantly reduced.

Many studies have shown that laparoscopic procedures result in decreased hospital stays, a quicker return to the workforce, decreased pain, better cosmesis, and better postoperative immune function. 

As attractive as minimally invasive surgery is, there are several limitations. Some of the more prominent limitations involve the technical and mechanical nature of the equipment. Inherent in current laparoscopic equipment is a loss of haptic feedback (force and tactile), natural hand-eye coordination, and dexterity. Moving the laparoscopic instruments while watching a 2-dimensional video monitor is somewhat counterintuitive. 

One must move the instrument in the opposite direction from the desired target on the monitor to interact with the site of interest. Hand-eye coordination is therefore compromised. Some refer to this as the fulcrum effect. 

Current instruments have restricted degrees of motion; most have 4 degrees of motion, whereas the human wrist and hand have 7 degrees of motion. There is also a decreased sense of touch that makes tissue manipulation more heavily dependent on visualization. 

Finally, physiologic tremors in the surgeon are readily transmitted through the length of rigid instruments. These limitations make more delicate dissections and anastomoses difficult if not impossible. 

The motivation to develop surgical robots is rooted in the desire to overcome the limitations of current laparoscopic technologies and to expand the benefits of minimally invasive surgery.

From their inception, surgical robots have been envisioned to extend the capabilities of human surgeons beyond the limits of conventional laparoscopy. The history of robotics in surgery begins with the Puma 560, a robot used in 1985 by Kwoh to perform neurosurgical biopsies with greater precision. 

Three years later, Davies et al performed a transurethral resection of the prostate using the Puma 560. This system eventually led to the development of PROBOT, a robot designed specifically for transurethral resection of the prostate. 

While PROBOT was being developed, Integrated Surgical Systems of Sacramento, CA, was developing ROBODOC, a robotic system designed to machine the femur with greater precision in hip replacement surgeries. ROBODOC was the first surgical robot approved by the FDA.

Also in the mid-to-late 1980s, a group of researchers at the National Aeronautics and Space Administration (NASA) Ames Research Center working on virtual reality became interested in using this work to develop telepresence surgery.

This concept of telesurgery became one of the main driving forces behind the development of surgical robots. In the early 1990s, several of the scientists from the NASA-Ames team joined the Stanford Research Institute (SRI). Working with SRI's other roboticists and virtual reality experts, these scientists developed a dexterous telemanipulator for hand surgery.

 One of their main design goals was to give the surgeon the sense of operating directly on the patient rather than from across the room. While these robots were being developed, general surgeons and endoscopists joined the development team and realized the potential these systems had in ameliorating the limitations of conventional laparoscopic surgery.

Current robotic surgical systems:

Today, many robots and robot enhancements are being researched and developed. Schurr and colleagues at Eberhard Karls University's section for minimally invasive surgery have developed a master-slave manipulator system that they call ARTEMIS.

This system consists of 2 robotic arms that are controlled by a surgeon at a control console. Dario et al at the MiTech laboratory of Scuola Superiore Sant’Anna in Italy have developed a prototype miniature robotic system for computer-enhanced colonoscopy. 

This system provides the same functions as conventional colonoscopy systems but it does so with an inchworm-like locomotion using vacuum suction. 

By allowing the endoscopist to teleoperate or directly supervise this endoscope, and with the functional integration of endoscopic tools, they believe this system is not only feasible but may expand the applications of endoluminal diagnosis and surgery.

Several other laboratories, including the authors', are designing and developing systems and models for reality-based haptic feedback in minimally invasive surgery, and are also combining visual servoing with haptic feedback for robot-assisted surgery.

In addition to PROBOT, ROBODOC, and the systems mentioned above, several other robotic systems have been commercially developed and approved by the FDA for general surgical use. These include the AESOP system (Computer Motion Inc., Santa Barbara, CA), a voice-activated robotic endoscope, and the comprehensive master-slave surgical robotic systems da Vinci (Intuitive Surgical Inc., Mountain View, CA) and Zeus (Computer Motion Inc., Santa Barbara, CA).

The da Vinci and Zeus systems:

The da Vinci and Zeus systems are similar in their capabilities but different in their approaches to robotic surgery. Both systems are comprehensive master-slave surgical robots with multiple arms operated remotely from a console with video-assisted visualization and computer enhancement. In the da Vinci system, which evolved from the telepresence machines developed for NASA and the US Army, there are essentially 3 components: a vision cart that holds a dual light source and dual 3-chip cameras; a master console where the operating surgeon sits; and a moveable cart where 2 instrument arms and the camera arm are mounted.

The camera arm contains dual cameras, and the image generated is 3-dimensional. The master console consists of an image-processing computer that generates a true 3-dimensional image with depth of field; the view port where the surgeon views the image; foot pedals to control electrocautery, camera focus, and instrument/camera arm clutches; and master control grips that drive the servant robotic arms at the patient's side.

The instruments are cable driven and provide 7 degrees of freedom. The system displays its 3-dimensional image above the hands of the surgeon, giving the surgeon the illusion that the tips of the instruments are an extension of the control grips and hence the impression of being at the surgical site.

The Zeus system is composed of a surgeon control console and 3 table-mounted robotic arms. The right and left robotic arms replicate the arms of the surgeon, and the third arm is an AESOP voice-controlled robotic endoscope for visualization. 

In the Zeus system, the surgeon is seated comfortably upright with the video monitor and instrument handles positioned ergonomically to maximize dexterity and allow complete visualization of the OR environment. 

The system uses both straight shafted endoscopic instruments similar to conventional endoscopic instruments and jointed instruments with articulating end-effectors and 7 degrees of freedom.


Medical artificial intelligence:

AI in medicine means tasks are completed more quickly, and it frees up a medical professional's time to perform other duties that can't be automated. Artificial intelligence can also manage medical records and other data.

Robots can collect, store, re-format, and trace data to offer faster and more consistent access. Data management is the most widely used application of artificial intelligence and digital automation.

Medical artificial intelligence refers to the use of AI technology and automated processes in the diagnosis and treatment of patients who require care. Medical records are digitized, appointments can be scheduled online, and patients can check into health centers or clinics using their phones or computers.

AI is used for collecting data through patient interviews and tests; processing and analyzing the results; combining multiple sources of data to reach an accurate diagnosis; determining an appropriate treatment method; preparing and administering the chosen treatment; and patient monitoring and aftercare, such as follow-up appointments.
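The workflow above can be sketched as a minimal pipeline. Everything here is illustrative: the stage functions, the risk-score rule, and the thresholds are invented for the example and bear no relation to any real clinical system.

```python
# Hypothetical sketch of the stages listed above: collect data,
# analyze it, combine the evidence into a diagnosis, and pick a
# treatment plan. The scoring rule is purely illustrative.

def collect(interview, tests):
    # Stage 1: gather data from the patient interview and lab tests.
    return {"symptoms": interview, "labs": tests}

def analyze(record):
    # Stage 2: reduce the record to a simple risk score
    # (one point per symptom, one per abnormal lab result).
    return len(record["symptoms"]) + sum(
        1 for result in record["labs"].values() if result == "abnormal"
    )

def diagnose_and_treat(score):
    # Stages 3-4: turn the combined evidence into a diagnosis
    # and an appropriate treatment plan.
    if score >= 3:
        return {"diagnosis": "follow-up needed", "plan": "specialist referral"}
    return {"diagnosis": "routine", "plan": "monitor at next visit"}

record = collect(["cough", "fatigue"], {"x_ray": "abnormal", "blood": "normal"})
plan = diagnose_and_treat(analyze(record))
```

The point of the sketch is the staged structure, with each stage consuming the previous stage's output, which is also what makes such pipelines easy to audit and automate piecewise.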

Advantages of robot-assisted surgery:

These robotic systems enhance dexterity in several ways. Instruments with increased degrees of freedom greatly enhance the surgeon's ability to manipulate instruments and thus the tissues. These systems are designed so that the surgeon's tremor is compensated for in the end-effector motion through appropriate hardware and software filters.

In addition, these systems can scale movements so that large movements of the control grips can be transformed into micromotions inside the patient.
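These two ideas, tremor filtering and motion scaling, can be illustrated with a minimal sketch. Real systems use far more sophisticated multi-axis, model-based filters; here a simple exponential moving average stands in as the low-pass filter, and a 5:1 scale factor (an assumed value, chosen for the example) turns large grip motions into micromotions at the tip.

```python
# Illustrative sketch of tremor suppression plus motion scaling.
# An exponential moving average acts as a low-pass filter that damps
# high-frequency tremor; a scale factor of 0.2 turns centimeters at
# the control grips into millimeters at the instrument tip.

def filter_and_scale(master_positions, scale=0.2, alpha=0.3):
    """Return smoothed, scaled instrument-tip positions."""
    tip_positions = []
    smoothed = master_positions[0]
    for p in master_positions:
        # Low-pass filter: blend the new sample with history,
        # so rapid jitter is attenuated while slow motion passes.
        smoothed = alpha * p + (1 - alpha) * smoothed
        # Motion scaling: large grip movement -> micromotion at the tip.
        tip_positions.append(smoothed * scale)
    return tip_positions
```

Feeding in a steady grip position of 10 cm yields tip positions of about 2 cm, while a high-frequency wobble superimposed on the input is both damped by the filter and shrunk by the scale factor before it reaches the tissue.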

Another important advantage is the restoration of proper hand-eye coordination and an ergonomic position. These robotic systems eliminate the fulcrum effect, making instrument manipulation more intuitive. 

With the surgeon sitting at a remote, ergonomically designed workstation, current systems also eliminate the need to twist and turn in awkward positions to move the instruments and visualize the monitor.

By most accounts, the enhanced vision afforded by these systems is remarkable. The 3-dimensional view with depth perception is a marked improvement over the conventional laparoscopic camera views. 

Also to one’s advantage is the surgeon’s ability to directly control a stable visual field with increased magnification and maneuverability. 

All of this creates images with increased resolution that, combined with the increased degrees of freedom and enhanced dexterity, greatly enhances the surgeon’s ability to identify and dissect anatomic structures as well as to construct microanastomose

Disadvantages of robotic-assisted surgery:

There are several disadvantages to these systems. First of all, robotic surgery is a new technology, and its uses and efficacy have not yet been well established. To date, mostly feasibility studies have been conducted, and almost no long-term follow-up studies have been performed. 

Many procedures will also have to be redesigned to optimize the use of robotic arms and increase efficiency. However, time will most likely remedy these disadvantages.

Another disadvantage of these systems is their cost. With a price tag of a million dollars, their cost is nearly prohibitive. 

Whether the price of these systems will fall or rise is a matter of conjecture. Some believe that with improvements in technology and as more experience is gained with robotic systems, the price will fall.  

Others believe that improvements in technology, such as haptics, increased processor speeds, and more complex and capable software will increase the cost of these systems. Also at issue is the problem of upgrading systems; how much will hospitals and healthcare organizations have to spend on upgrades and how often? 

In any case, many believe that to justify the purchase of these systems they must gain widespread multidisciplinary use.

Another disadvantage is the size of these systems. Both have relatively large footprints and relatively cumbersome robotic arms. This is an important disadvantage in today's already crowded operating rooms. It may be difficult for both the surgical team and the robot to fit into the operating room. 

Some suggest that miniaturizing the robotic arms and instruments will address the problems associated with their current size. Others believe that larger operating suites with multiple booms and wall mountings will be needed to accommodate the extra space requirements of robotic surgical systems. The cost of making room for these robots and the cost of the robots themselves make them an especially expensive technology.

One of the potential disadvantages identified is a lack of compatible instruments and equipment. The lack of certain instruments increases reliance on tableside assistants to perform part of the surgery. This, however, is a transient disadvantage, because new technologies have been and will be developed to address these shortcomings.

Most of the disadvantages identified will be remedied with time and improvements in technology. 

Only time will tell if the use of these systems justifies their cost. If the cost of these systems remains high and they do not reduce the cost of routine procedures, it is unlikely that there will be a robot in every operating room and thus unlikely that they will be used for routine surgeries.

Practical uses of surgical robots:

In today’s competitive healthcare market, many organizations are interested in making themselves “cutting-edge” institutions with the most advanced technological equipment and the very newest treatment and testing modalities. 

Doing so allows them to capture more of the healthcare market. Acquiring a surgical robot is in essence the entry fee into marketing an institution’s surgical specialties as “the most advanced.” It is not uncommon, for example, to see a photo of a surgical robot on the cover of a hospital’s marketing brochure and yet see no word mentioning robotic surgery inside.

In terms of ideas and science, surgical robotics is deep, fertile soil. It may come to pass that the robotic systems themselves are used very little, but the technology they generate and the advances in ancillary products will continue.

Already, the development of robotics is spurring interest in new tissue-anastomosis techniques, improved laparoscopic instruments, and the digital integration of existing technologies.

As mentioned previously, applications of robotic surgery are expanding rapidly into many different surgical disciplines. The cost of procuring one of these systems remains high, however, making it unlikely that an institution will acquire more than one or two. 

This low number of machines and the low number of surgeons trained to use them make the incorporation of robotics into routine surgeries rare. Whether this changes with the passing of time remains to be seen.

The Future of Robotic Surgery:

Robotic surgery is in its infancy. Many obstacles and disadvantages will be resolved in time, and no doubt many other questions will arise. Many questions have yet to be asked: questions of malpractice liability, credentialing, training requirements, and interstate licensing for tele-surgeons, to name just a few.

Many of the current advantages of robot-assisted surgery ensure its continued development and expansion. 

For example, the sophistication of the controls and the multiple degrees of freedom afforded by the Zeus and da Vinci systems allow increased mobility and tremor-free movement without compromising the visual field, making microanastomosis possible. 

Many have made the observation that robotic systems are information systems and as such they have the ability to interface and integrate many of the technologies being developed for and currently used in the operating room. 

One exciting possibility is expanding the use of preoperative (computed tomography or magnetic resonance) and intraoperative video image fusion to better guide the surgeon in dissection and identifying pathology. 

These data may also be used to rehearse complex procedures before they are undertaken. The nature of robotic systems also makes long-distance intraoperative consultation or guidance possible, and it may provide new opportunities for teaching and assessing new surgeons through mentoring and simulation. 

Computer Motion, the makers of the Zeus robotic surgical system, is already marketing a device called SOCRATES that allows surgeons at remote sites to connect to an operating room and share video and audio, to use a “telestrator” to highlight anatomy, and to control the AESOP endoscopic camera.

Technically, much remains to be done before robotic surgery's full potential can be realized. Although these systems have greatly improved dexterity, they have yet to realize their full potential in instrumentation or to incorporate the full range of sensory input. 

More standard mechanical tools and more energy-directed tools need to be developed. Some authors also believe that robotic surgery can be extended into the realm of advanced diagnostic testing with the development and use of ultrasonography, near-infrared, and confocal microscopy equipment.

Much like the robots in popular culture, the future of robotics in surgery is limited only by imagination. Many future “advancements” are already being researched. Some laboratories, including the authors’ laboratory, are currently working on systems to relay touch sensation from robotic instruments back to the surgeon. 

Other laboratories are working on improving current methods and developing new devices for suture-less anastomosis. When most people think about robotics, they think about automation. The possibility of automating some tasks is both exciting and controversial. 

Future systems might include the ability for a surgeon to program the surgery and merely supervise as the robot performs most of the tasks. The possibilities for improvement and advancement are only limited by imagination and cost.


Although still in its infancy, robotic surgery has already proven itself to be of great value, particularly in areas inaccessible to conventional laparoscopic procedures. It remains to be seen, however, if robotic systems will replace conventional laparoscopic instruments in less technically demanding procedures.

 In any case, robotic technology is set to revolutionize surgery by improving and expanding laparoscopic procedures, advancing surgical technology, and bringing surgery into the digital age. 

Furthermore, it has the potential to expand surgical treatment modalities beyond the limits of human ability. Whether or not the benefit of its usage overcomes the cost to implement it remains to be seen and much remains to be worked out. 

Although feasibility has largely been shown, more prospective randomized trials evaluating efficacy and safety must be undertaken. Further research must also evaluate cost-effectiveness and demonstrate a true benefit over conventional therapy for robotic surgery to take full root.