Human Intervention

Medical Machines

Syed V. Ahamed , in Intelligent Networks, 2013

10.8.2 Design of a Medical Computer

Figure 10.12 depicts an initial design of a medical computer. Human intervention in the computational aspects is minimized in this configuration to permit the machine to deploy its AI programs exhaustively before offering its analysis and conclusions for any medical problem. Such problems are loaded by the users as ill-defined or partially defined problems.

Figure 10.12. Architecture of an MM and MKPS/machine with multiple wafer-level MPUs and KPUs. Modern CPUs and IOPs of circa 2010 can handle the intricate functions expected from the MM and provide connectivity to the WWW medical knowledge bases, backbone networks, and WANs.

The compilers, loaders, and linkers systematically break down the problem posed by the users into segments of what is known, what is uncertain, and what is unknown through the analysis of the medical noun objects (MNOs), medical verb functions (MVFs), and their convolution (⍟). The primary entities (i.e., the dominant MNOs), their crucial functions (i.e., the principal transactions or interactions, MVFs), and the resulting effects of these crucial actions (i.e., the dominant convolutions ⍟s) are classified and compiled at the initial stages of medical computation. The segments of the problem that are identifiable and documented in the KBs around the world (see the left and right links to Internet KBs in Figure 10.11) are assigned high confidence in the segment(s) of the final solution. Conversely, the remaining MNOs, MVFs, and ⍟s are assigned lower confidence levels. Their functions, i.e., the convolutions of these lower-confidence MNOs and MVFs, become less confident, suspicious, or error prone. This forces the AI segments of the machine to research the nature of (MPF, ⍟, MNOs or mpf_i, ⍟_j, mno_k) more extensively and to offer a solution with a confidence limit associated with these unknown medical parameters. The machine is thus forced into offering an uncertain result. Such scenarios are common in medical ES problems such as Internist-I (Miller et al., 1982) or NeoMycin (Shortliffe, 1976).
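The classification step described above can be pictured as a simple data structure: each (MNO, MVF) pair extracted from the user's problem statement is matched against the knowledge bases and tagged with a confidence level. The sketch below is purely illustrative; the `lookup_in_kb` callable and the thresholds are hypothetical placeholders, not the architecture of Figure 10.12.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One (MNO, MVF) pair extracted from the user's problem statement."""
    mno: str          # medical noun object, e.g., "hemoglobin"
    mvf: str          # medical verb function, e.g., "decreases"
    confidence: float # assigned after the knowledge-base lookup

def classify_segments(pairs, lookup_in_kb):
    """Tag each pair as known/uncertain/unknown based on KB evidence.

    `lookup_in_kb` is a hypothetical callable returning the fraction of
    consulted knowledge bases that document the pair (0.0 to 1.0).
    """
    known, uncertain, unknown = [], [], []
    for mno, mvf in pairs:
        support = lookup_in_kb(mno, mvf)
        seg = Segment(mno, mvf, confidence=support)
        if support >= 0.8:
            known.append(seg)       # widely documented: high confidence
        elif support > 0.2:
            uncertain.append(seg)   # partially documented: lower confidence
        else:
            unknown.append(seg)     # undocumented: handed to the AI routines
    return known, uncertain, unknown

# Toy usage with a stand-in knowledge-base lookup.
known, uncertain, unknown = classify_segments(
    [("hemoglobin", "decreases"), ("troponin", "spikes")],
    lookup_in_kb=lambda mno, mvf: 0.9 if mno == "hemoglobin" else 0.1)
```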

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124166301000108

31st European Symposium on Computer Aided Process Engineering

Fabian Lechtenberg , ... Moisès Graells , in Computer Aided Chemical Engineering, 2021

6 Conclusions

Lists of (linguistically) relevant documents were identified with minimal human intervention. These lists have the characteristic of being populated by highly relevant documents in the upper ranks, which makes it possible to limit the selection of documents to download for the subsequent information extraction step. The candidate documents proved to have similar or even higher relevance to the domain than the documents in the seed corpus. A first qualitative assessment of the titles and abstracts indicates that these documents are truly relevant to the posed question. Our investigations showed that the proposed information retrieval methodology performs appropriately using the selected database and a seed corpus taken from a chemical engineering field. This points to the potential of establishing a systematic machine-assisted search procedure for model parameters and knowledge, effectively reducing the workload of engineers in the PSE community and going beyond what a completely manual procedure could achieve. As of now, the methodology assesses document relevance by means of the BM25 metric. This metric allows for a pre-selection of documents, but the next necessary step in the development of the whole information retrieval and extraction cycle is to systematically classify the true relevance of the documents with a machine-assisted information extraction methodology. Moreover, the methodology has been tested using only one database. Further work is in progress to extend the search and improve its efficiency (speed and accuracy).
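For readers unfamiliar with the ranking function mentioned above, the following minimal sketch shows how BM25 scores can be computed for tokenized candidate documents against a query. The corpus, query, and parameter values (k1 = 1.5, b = 0.75) are illustrative defaults, not the configuration used in the paper.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document in `docs` against `query_terms` with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in set(query_terms)}
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

# Toy usage: rank two candidate abstracts for a query about reaction kinetics.
docs = [["reaction", "kinetics", "rate", "constant"],
        ["heat", "exchanger", "design"]]
print(bm25_scores(["reaction", "kinetics"], docs))
```

Documents with the highest scores would form the upper ranks from which the download selection is made.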

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780323885065501522

Log Storage Technologies

Anton Chuvakin , ... Chris Phillips , in Logging and Log Management, 2013

Offline

Offline storage is the slowest and cheapest option. Offline systems typically require human intervention to retrieve an optical disk or tape and restore the data onto online or near-line storage systems for data access. Offline storage is highly scalable through the purchase of additional optical disks or tapes, and the systems are typically cheaper than near-line storage. The issue for both near-line and offline storage is the expected shelf life of the storage medium. The generally accepted shelf life of a CD/DVD is roughly 2 to 5 years, with roughly the same life span for tape (National Archives). As you approach the end of the media's life span, you will need to rerecord the data onto new media if your retention policy is longer than the life of the media.
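As a rough illustration of the retention-versus-shelf-life check described above, the short sketch below flags media whose retention period outlives the medium itself. The five-year shelf life, the six-month warning window, and the function name are assumptions for illustration only.

```python
from datetime import date, timedelta

MEDIA_SHELF_LIFE_YEARS = 5  # assumed upper bound for CD/DVD or tape

def needs_rerecording(written_on: date, retention_years: int, today: date) -> bool:
    """True if the retention policy outlives the medium, so the data must be
    copied to fresh media before the original reaches end of life."""
    media_end_of_life = written_on + timedelta(days=365 * MEDIA_SHELF_LIFE_YEARS)
    retention_end = written_on + timedelta(days=365 * retention_years)
    warning_window = timedelta(days=180)
    return retention_end > media_end_of_life and today >= media_end_of_life - warning_window

# Example: a tape written in 2010 under a 7-year retention policy.
print(needs_rerecording(date(2010, 1, 1), retention_years=7, today=date(2014, 9, 1)))
```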

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978159749635300004X

Intelligent and smart enabling technologies in advanced applications: recent trends

Mayurakshi Jana , Suparna Biswas , in Recent Trends in Computational Intelligence Enabled Research, 2021

21.2.2 Machine learning

One of the most important applications of artificial intelligence (AI) is machine learning (ML). It is used to enhance the capability of a system to learn from experience. Traditional ML algorithms depend on the given data and make decisions about a particular problem based on real-world knowledge. ML is used in detecting false alarms, drug discovery, pattern recognition, text-to-speech or speech-to-text recognition, entertainment recommendations, soil moisture prediction, video surveillance, etc.

The advantages of ML algorithms include:

ML algorithms do not need human intervention to write a program. They can make predictions based upon a given data set and real-world knowledge.

ML algorithms can easily recognize patterns. That is why disease prediction and recommendations for e-commerce websites can be made straightforwardly.

They can handle high-volume multidimensional "big data."

Evaluation parameters, including accuracy, sensitivity, specificity, etc., remain high because the algorithms continuously improve with experience gained from acquired data.

The disadvantages of ML algorithms include:

ML algorithms need a large amount of computational resources to carry out prediction.

Choosing an accurate ML algorithm is a difficult task; therefore, applying all ML algorithms to find the one with the highest accuracy is difficult and time-consuming.

Sometimes the training data set used by an ML algorithm is biased; in that case the predicted results will also be biased (Fig. 21.2). A small illustrative sketch follows the figure.

Figure 21.2. Process of a machine learning algorithm.
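To make the fit/predict workflow and the bias caveat above concrete, here is a minimal sketch using scikit-learn. The data set is synthetic and the feature meanings are invented for illustration; it is not tied to any application named in the chapter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: [age, blood_pressure] -> disease label (0/1).
# If this sample over-represents one group, the learned model inherits that bias.
X_train = np.array([[25, 120], [40, 135], [60, 150], [70, 160]])
y_train = np.array([0, 0, 1, 1])

model = LogisticRegression()
model.fit(X_train, y_train)          # learn from the given data set
print(model.predict([[55, 145]]))    # predict for an unseen case
```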

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128228449000451

Swarm intelligence based MSMOPSO for optimization of resource provisioning in Internet of Things

Daneshwari I. Hatti , Ashok V. Sutagundar , in Recent Trends in Computational Intelligence Enabled Research, 2021

Abstract

In the era of the Internet of Things, most devices communicate through the Internet without human intervention, but heterogeneous devices possess varied resource capabilities and require additional resources for processing. Management of resources therefore becomes a crucial aspect and imposes several challenges, namely resource management for the processing of tasks with reduced response time, energy consumption, authenticity, and bandwidth utilization. Computing and communication resources are therefore offered through the fog computing paradigm, and intelligence is enhanced through agents. The proposed Multi-Swarm Multi-Objective Particle Swarm Optimization (MSMOPSO) with agent technology is employed for managing diverse devices and the dynamically changing resources of fog devices so as to optimize the provisioning of resources for end users. The proposed work authenticates devices, provisions resources based on fitness value, and schedules tasks by time-shared and cloudlet-shared scheduling policies. It is evaluated using CloudSim Plus and performs better under the dynamic nature of fog devices, ensuring optimized resource utilization and reduced energy consumption and cost compared to best-fit and worst-fit algorithms.
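The abstract refers to particle swarm optimization; the following is a generic, single-swarm, single-objective PSO loop given only to illustrate the velocity and position update mechanics. It is not the authors' MSMOPSO algorithm, and the toy fitness function stands in for a real objective such as weighted response time plus energy cost.

```python
import random

def pso_minimize(fitness, dim=2, particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    """Generic PSO: each particle is pulled toward its own best and the swarm's best."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy fitness: a placeholder objective (sphere function).
print(pso_minimize(lambda x: sum(v * v for v in x)))
```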

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128228449000281

Integrating Automation into Your Development Process

Bill Holtsnider , ... Joseph Gee , in Agile Development & Business Goals, 2010

X-unit test frameworks

X-unit test frameworks are testing frameworks designed for testing software without human intervention. These are the tools that support a Test First approach, allowing the inputs and outputs to be defined before the code is written to actually transform the inputs into the outputs.
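Although the example named later in this section is JUnit, the same x-unit pattern exists in most languages. The sketch below uses Python's built-in unittest module to show a Test First workflow in which the expected inputs and outputs are written down before the function body exists; the function name `parse_price` and its behavior are invented for illustration.

```python
import unittest

def parse_price(text: str) -> float:
    """Convert a price string such as '$1,250.00' into a float.
    Written only after the tests below defined the expected behavior."""
    return float(text.replace("$", "").replace(",", ""))

class ParsePriceTest(unittest.TestCase):
    def test_strips_currency_symbol_and_commas(self):
        # Inputs and expected outputs were fixed before the implementation.
        self.assertEqual(parse_price("$1,250.00"), 1250.0)

    def test_plain_number_passes_through(self):
        self.assertEqual(parse_price("42"), 42.0)

if __name__ == "__main__":
    unittest.main()  # runs without human intervention, e.g., on a build server
```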

X-unit frameworks work in conjunction with other tools and by themselves are valuable only to a developer; however, coupled with continuous integration, a build server, and code coverage tools, they provide an ongoing benefit to the teams and the company. Because x-unit tests may be run inside an editor, they are valuable to a developer refactoring or enhancing code: the developer can be sure that existing functionality hasn't been broken or disrupted. Used with a continuous integration approach and an automated build server, x-unit tests are executed with every build, not only testing existing functionality but also serving as a communication tool between developers and even bullpens. When code coverage is enabled, x-unit test frameworks provide a way to measure how much of the code the x-unit tests exercise.

One communication benefit of x-unit frameworks is that they document the code they are testing. Because each x-unit test must invoke the code it is testing, it shows the expected form of the parameters passed into the invoked code. The statement "look at the x-unit test; it shows how to call it and what the results are" is extremely powerful, as the test exercises and documents usage at the same time.

Another benefit of x-unit frameworks is that developers must actually invoke their code. This isn't to say that they wouldn't otherwise find some method to fire up the code, but rather that they must assemble the parameters and invoke the code just as other developers who use the code would. This promotes smaller parameter lists (lowering complexity), object orientation (enhancing reusability), and generally simpler interfaces (APIs, not GUI interfaces). It is common for the original form of a method's parameter set to be discarded in favor of something much easier to wire together with code, resulting in improved readability and maintainability.

Java development can use the open source JUnit package right off the shelf. JUnit drops right into many preferred Java development tools, such as Eclipse, Maven, Cobertura, and Hudson, producing unit test checks with each build and metrics for code coverage.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123815200000084

Internet Homepages

Michael L. Rodgers , ... David A. Starrett , in Encyclopedia of Information Systems, 2003

VI.B. Managing a Web Site

While it would be nice to think that a Web site can function without human intervention after the development and implementation of the site, this is probably not going to be a reality. In fact, successful sites need constant attention. By its nature, the Web is dynamic and has an inherent timeliness to it, allowing almost instant posting of up-to-date information. Many Web pages contain a "last-updated" stamp to inform visitors when the content was last updated. Many visitors will look for such a stamp in order to determine the timeliness of the page and perhaps infer its validity. As mentioned earlier, the type of site offered will to a degree determine the frequency and quantity of maintenance or changes. Certainly, a site containing information such as product prices or quantities must be updated frequently, if not instantaneously. Dates for sales, current promotions, messages, etc. might also require frequent updating. Even the site's overall look will likely change periodically as a new look is used to freshen up and increase interest in the site. Most successful large sites undergo periodic facelifts for aesthetics, content, and usability. Remember, visitors will come to a site and take only a few seconds to look around. Users must be kept interested and involved so that they will explore further. A static, stale site will rapidly lose potential users. Keep the site current, interactive, and attention grabbing.

There are behind-the-scenes concerns in Web site management also. If the intent is to sell products on the site, the organization will need to consider what payment options to offer. Printable mail-in invoices are a possibility, but most customers prefer instant online payment options. This means establishing a mechanism for securely submitting credit card numbers. Data encryption protocols, such as the Secure Sockets Layer (SSL) protocol developed by Netscape for transmitting private documents, are commonly used for such purposes. SSL uses a private key to encrypt data transferred over the SSL connection. By convention, Web pages requiring an SSL connection have URLs beginning with https: instead of http:. SSL is considered an industry standard and is supported by both Netscape and Internet Explorer. Security entails not only secure transmission of information between site and customer, but also protection against unwanted access to the site. Web sites/servers must be guarded against viruses, denial of service attacks, and other types of hacker attacks. Virus protection software packages are widely available at reasonable cost from well-established and trusted companies. Firewalls are software-implemented gates that monitor and restrict access to the site. They are one of the main lines of defense against hackers. Firewalls should allow information to flow between an organization's server and legitimate visitors to the site, while at the same time minimizing access by hackers or others with unscrupulous intentions. Numerous firewall programs are also available at a reasonable cost.
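As a small illustration of the encrypted-connection idea described above, the sketch below opens a TLS connection (the modern successor to SSL) using Python's standard library and verifies the server certificate against the trusted certificate authorities. The hostname is a placeholder; any HTTPS site would do.

```python
import socket
import ssl

HOSTNAME = "www.example.com"  # placeholder host

context = ssl.create_default_context()  # loads trusted CA certificates

with socket.create_connection((HOSTNAME, 443)) as raw_sock:
    # wrap_socket performs the TLS handshake and certificate verification
    with context.wrap_socket(raw_sock, server_hostname=HOSTNAME) as tls_sock:
        cert = tls_sock.getpeercert()
        print("Negotiated protocol:", tls_sock.version())
        print("Certificate subject:", cert["subject"])
```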

Finally, good Web management practice includes maximizing reliability, availability, and data integrity. No computer is infallible. Computers can, and will, "crash." Having a server crash, taking all of your data, Web pages, etc. with it, can be a devastating experience. The wisely managed Web site exploits numerous features designed to minimize such losses in a crash. One mechanism for minimizing loss is the use of RAID, a redundant array of independent (or inexpensive) disks: a category of disk storage that employs two or more drives in combination for performance. There are different levels of RAID, numbered 0 through 5, with different performance and reliability characteristics. All utilize the same strategy of spreading or mirroring data across multiple disks to minimize the possibility that data will be irretrievably lost in the case of hard drive failure. A second approach is to regularly back up data to an external storage device. Typically, this is carried out by periodically writing data to a tape drive. This may mean writing the entire contents of a hard drive, or drives, to tape or merely writing data files or other dynamic content. Backups may be done monthly, weekly, daily, or even more frequently. Daily backups are common. Numerous tapes are used so that a Web site may be restored with data that may be many days or even weeks old. It might be necessary to go back to a point before a virus was introduced or data was corrupted. For instance, 14 tapes might be rotated with daily tape backups, ensuring that there are copies of the data as it existed each day over the previous 2 weeks.
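The 14-tape daily rotation mentioned above can be expressed as a trivial calculation: the tape used on any given day is the day count modulo the number of tapes in the pool. The sketch below is a toy illustration of that idea, not the scheduling logic of any backup product.

```python
from datetime import date

TAPES_IN_POOL = 14  # two weeks of daily backups before tapes are reused

def tape_for(day: date, cycle_start: date = date(2013, 1, 1)) -> int:
    """Return the tape slot (0-13) to use for the backup taken on `day`."""
    return (day - cycle_start).days % TAPES_IN_POOL

# The backup written today overwrites the copy made exactly 14 days earlier.
print(tape_for(date(2013, 1, 15)))  # -> 0, same slot as January 1
```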

A third approach to ensuring reliability is the use of a power supply backup. The most common method utilizes a UPS (uninterruptible power supply), a power supply that includes a battery to maintain power in the event of a power outage. Typically, a UPS keeps a computer running for several minutes after a power outage, enabling data to be saved and allowing the computer to be shut down properly. Many UPS devices offer a software component that enables automated backup and shutdown procedures in case there is a power failure when the Web management team is not present. There are two basic types of UPS systems: standby power systems (SPSs), which monitor the power line and switch to battery power as soon as a problem is detected, and online UPS systems, which constantly provide power from built-in inverters, even when external power is present. In an SPS, the switch to battery can require several milliseconds, during which time the computer is not receiving any power. An online UPS avoids these momentary power lapses by always supplying power. These three approaches are not exclusive of each other, and, in fact, the highest level of reliability can be attained if all three approaches are used concurrently.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0122272404000952

About the Book

Syed V. Ahamed , in Intelligent Networks, 2013

Chapter 3 deals with the explosive field of new services that the networks perform without human intervention. Machines that perform switching and network services are introduced and explored for newer medical services. Occasionally, medical services become new extensions of current and feasible network services. When complex medical services are called for, the medical machines partition them into strings of current/feasible services and "assemble" the instructions for these machines. Traditional computer system assemblers routinely assemble numeric and logical operations to perform a complex numeric or algebraic function such as an inverse tangent function or the summation of a series. The precedent already exists. Chapter 3 further introduces the readers to switching functions in communication networks by examining the logical and number translation functions in traditional communication systems. Database technologies and devices perform much of the legwork, and communication paths for circuit-switched and data-switched networks are established with high dependability and accuracy.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124166301000303

MPLS and IP Switching

Walter Goralski , in The Illustrated Network (Second Edition), 2017

Signaling and MPLS

There are two signaling protocols that can be used in MPLS to automatically set up LSPs without human intervention (other than configuring the signaling protocols themselves!). The Resource Reservation Protocol (RSVP) was originally invented to set up QoS "paths" from host to host through a router network, but it never scaled well or worked as advertised. Today, RSVP has been defined in RFC 3209 (again, there have been many updates) as RSVP for TE and is used as a signaling protocol for MPLS. RSVP is used almost exclusively as RSVP-TE (most people just say RSVP) by routers to set up LSPs (explicit-path LSPs), but can still be used for QoS purposes (constrained-path LSPs).

The Label Distribution Protocol (LDP), defined in RFC 5036 (originally RFC 3036), is used exclusively with MPLS but cannot be used for adding QoS to LSPs other than through simple constraints when setting up paths (as constrained-route LDP, or CR-LDP, defined in RFC 3212). It should be noted that RFC 3468 deprecates CR-LDP as it "focuses" on using RSVP-TE for MPLS traffic engineering (however, the existence of RFC 7358 means LDP is still in use). LDP is trivial to configure compared to RSVP. This is because LDP works directly from the tables created by the IGP (OSPF or IS-IS). The lack of QoS support in LDP follows from the lack of any traffic-engineering intent in the process: LDP paths created from the IGP table exist only because of simple adjacency. In addition, LDP does not offer much if your routing platform can forward packets almost as fast as it can switch labels.

A lot of TCP/IP texts spend a lot of time explaining how RSVP-TE works (they deal with LDP less often). This is more of an artifact of the original use of RSVP as a host-based protocol. It is enough to note that RSVP messages are exchanged between all routers along the LSP from ingress to egress. The LSP label values are determined, and TE constraints respected, hop by hop through the network until the LSP is ready for traffic. The process is quick and efficient, but only a few parameters that change RSVP operation significantly (such as interval timers) can be configured, even on routers, and none at all on hosts.
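Whatever signaling protocol installs it, the end result on each label-switching router is a label-forwarding table: incoming (interface, label) pairs map to outgoing (interface, label) pairs. The toy dictionary below illustrates only that swap operation; the interface names and label values are made up and have nothing to do with RSVP-TE or LDP message formats.

```python
# Toy label-forwarding table for one transit LSR on an LSP.
# Key: (incoming interface, incoming label) -> Value: (outgoing interface, outgoing label)
LFIB = {
    ("ge-0/0/1", 100034): ("ge-0/0/2", 100071),
    ("ge-0/0/1", 100035): ("ge-0/0/3", 3),  # reserved label 3 = implicit null (penultimate hop pop)
}

def forward(in_if: str, in_label: int):
    """Swap the MPLS label and pick the outgoing interface, as a transit LSR would."""
    out_if, out_label = LFIB[(in_if, in_label)]
    action = "pop" if out_label == 3 else "swap"
    return out_if, out_label, action

print(forward("ge-0/0/1", 100034))  # -> ('ge-0/0/2', 100071, 'swap')
```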

Although not discussed in detail in this introduction to MPLS, another protocol is commonly used for MPLS control plane signaling, as described in RFC 4364 (with updates). BGP is a routing protocol, not a signaling protocol, but the extensions used in multiprotocol BGP (MPBGP, or MBGP—but we'll use MPBGP to avoid confusion with multicast BGP as MBGP) make it well suited for the types of path setup tasks described in this chapter. With MPBGP, it is possible to deploy BGP- and MPLS-based VPNs without the use of any other signaling protocol. LSPs are established based on the routing information distributed by MPBGP from PE to PE. MPBGP is backward compatible with "normal" BGP, and thus the use of these extensions does not require a wholesale upgrade of all routers at once.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128110270000199

Introduction

Sergios Theodoridis , in Machine Learning (Second Edition), 2020

Autonomous Cars

An autonomous or self-driving car is a vehicle that can move around with little or no human intervention. Most of us have used self-driving trains in airports. However, these operate in a very well-controlled environment. Autonomous cars are designed to operate in city streets and on motorways. This field is also of an interdisciplinary nature, where areas such as radar, lidar, computer vision, automatic control, sensor networks, and machine learning meet. It is anticipated that the use of self-driving cars will reduce the number of accidents, since, statistically, most accidents occur because of human error due to alcohol, high speed, stress, fatigue, etc.

There are various levels of automation that one can implement. At level 0, which is the category in which most cars currently operate, the driver has control and the automated built-in system may issue warnings. The higher the level, the more autonomy is present. For example, at level 4, the driver would first be notified whether conditions are safe, and then the driver can decide to switch the vehicle into autonomous driving mode. At the highest level, level 5, autonomous driving requires absolutely no human intervention [21].

Besides the aforementioned examples of notable machine learning applications, machine learning has been applied in a wide range of other areas, such as healthcare, bioinformatics, business, finance, education, law, and manufacturing.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128188033000106