Friday, 16 August 2013

Security versus Life Safety

          The key factor is that for most installations, a person must be able to exit a space with no special knowledge. Anyone should be able to walk up to the door and exit, including people of other languages, nationalities, and cultures. It is unacceptable to have a special lock that requires manipulating a switch marked in a language that not everyone understands.
          There is a common violation of this rule: magnetic locks operated by a nearby red push button. Strictly speaking, this does not comply with NFPA 101.
The second applicable document is the International Building Code (IBC), which superseded earlier model codes such as the Uniform Building Code (UBC). Other countries use their own sets of model building codes, which are often very similar to the IBC. Again, like the NFPA 101 Life Safety Code, these are standards, not law; they gain legal force when local authorities adopt them as mandatory references for building design.
          In the United States, the primary Life Safety standard is the Life Safety Code, published by the National Fire Protection Association (NFPA) and known as NFPA 101. NFPA 101 is revised every three years. Despite its title, the document is a standard, not legal code; its statutory authority derives from local Authorities Having Jurisdiction (AHJs), typically the local Fire Department. The language of NFPA 101 is crafted in a form that makes it suitable for mandatory application, and it is adopted into law by local authorities.

If there is a decision to be made between Security and Life Safety, it is simple. Life Safety wins, each time, every time. No matter how the argument is constructed, no matter how extenuating the circumstances, there has to be a provision for occupants to escape quickly in an emergency — freely and with no special knowledge of how to exit.

          Remember, in most cases a door with an electrified lock in an access control system is also monitored for intrusion. Any time the door opens, it is because of a legal entry, a legal exit, or an intrusion. Accordingly, access controlled doors are almost always equipped with a door position switch, which monitors whether the door is open or closed.
          The switch does not know whether the door was opened for a legal entry, a legal exit, or an intrusion; it only reports that the door is open. So the Access Control Panel must decide whether the door opening is appropriate or a concern to be reported as an alarm. It does this by checking whether a valid credential (card, key code, biometric) was read and reported to the Access Control Panel; the panel then executes several actions in response.
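That decision can be sketched in a few lines. This is an illustrative sketch only, not any vendor's firmware; the class, method names, and ten-second "shunt" window after a valid credential or request-to-exit event are assumptions for the example.

```python
# Illustrative sketch: how a panel might classify a door-position change.
# A recent valid credential read (or REX event) "shunts" the alarm briefly.
import time

SHUNT_SECONDS = 10  # hypothetical grace period after a valid unlock

class DoorController:
    def __init__(self):
        self.last_authorized = None  # time of last valid credential/REX event

    def valid_credential(self):
        """Called when a card/keypad/biometric read is accepted, or REX fires."""
        self.last_authorized = time.monotonic()

    def door_opened(self):
        """Called when the door position switch reports 'open'."""
        if (self.last_authorized is not None
                and time.monotonic() - self.last_authorized <= SHUNT_SECONDS):
            return "legal entry/exit"
        return "alarm: forced door"

panel = DoorController()
print(panel.door_opened())   # no prior authorization -> alarm
panel.valid_credential()
print(panel.door_opened())   # within shunt window -> legal
```

The key point the sketch shows is that the switch itself carries no intent; intent is inferred by the panel from what happened just before the door opened.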

Life Safety and Exit Devices

            Let me introduce you to your mantra: “It's all about life safety.” Once, when I was 17 years old, in the mid-1960s, I went to an office building late in the afternoon, hoping to visit a business that had a product I was interested in. I had called earlier and they had said to “come anytime,” which unfortunately I took literally. (I was seventeen; what can I say?) Their offices were on the 8th floor of a 12-story building in downtown Columbus, Ohio. I arrived at 6:15 p.m. on a Friday evening and caught the front door of the Ground Floor Lobby as a man was walking out. I took the elevator to the eighth floor, where most of the ceiling lights were off, wandered down the dark hallway, and located the company's office.
This may be the most important chapter in this book. In fact, someone who learns everything else in the book but not the material in this chapter would be a failure in the security industry. Life Safety is the most important element of security: the most basic core element of all security is to protect life first, whether with regard to access control, preventing workplace violence, or terrorism. This chapter discusses the apparent conflict between security and life-safety principles and how it should be resolved. We will discuss National and Local Life-Safety Codes and Regulations and how they apply to Electronic Access Control Systems, and we will talk about how Locks and Exit Devices affect life safety.

Vehicles need to be controlled on a property to ensure proper traffic flow (employees here, visitors there, etc.) and to control entry to restricted areas, such as an employee parking structure. The least expensive and most familiar way to do this is with a standard lift-arm barrier gate. Lift-arm (or semaphore-arm, named after the semaphore flags once used to guide airplanes onto aircraft carriers) barrier gates come in a variety of configurations depending on the width of the lane and the frequency of opening. The simplest semaphore-arm gate comprises a metal stand housing the motor and operating electronics, and a gate arm made of wood, aluminum, or PVC plastic.

Types of Access Controlled Portals

An Access Portal is a passageway through which a person must pass in order to go from one access zone to another. When confronting an access control portal, one knows that one is moving from one access area into another. Depending on the security configuration of the portal, access authorization may be required to enter, to leave, or both. In most cases, access can be granted to a single individual, but some higher-security zones may require the presence of two or more authorized people. Access portals can be configured to work on a schedule, so that access is free during some hours and requires authorization during others, or may be limited to certain individuals during certain hours.
            Pedestrian portals include standard doors, automatic doors, revolving doors, turnstiles, Man-Traps, and automated walls. Vehicle portals may include standard barrier gates, high-security barrier gates, and sally ports.
When a person confronts an access controlled portal, they must show authorization to pass. The most common portal design allows entry only to authorized users but lets anyone inside the access zone exit freely. This is normally accomplished by using a card reader, keypad, or biometric reader (or some combination of these) to authorize the user to enter. Exit is possible without being an authorized user; that is, any visitor or non-authorized person escorted inside by an authorized user is free to exit at any time and without any impediment. A typical portal has a locking system that must be unlocked in order to exit, so various Request-to-Exit sensors may be used, including an Exit Push Button, a Panic Exit Touch-Bar, or a motion detector over the door.
The standard door is the basic unit of an electronic access control system. Simple to understand, commonplace, and inexpensive, it is nearly ubiquitous: wherever there is a room or department to secure, in most cases you can bet that it already has a standard door enclosing it. Standard doors are intuitive (everyone knows how to use one) and can allow for the passage of one or more users at a time.

            A typical standard door access control portal comprises a door (single- or double-leaf) with hinges and door handle hardware; an electrified lock; a card reader, keypad, or biometric reader; one or two door position sensors to tell the system if the door is open or closed; and a request-to-exit sensor. These devices report to an access control panel that may be located near the door or centrally in a utility room.

Sprint Retrospective

          Every member of the Scrum Team strives to improve Sprint by Sprint. The Sprint Retrospective is where the improvements are formulated. This meeting should never exceed four hours.

      As a natural break between Sprints, the Sprint Retrospective is when the Scrum Team sits back, reviews what happened during the prior Sprint, and formulates ways to improve their work and the way the work is conducted. The discussion might include:
o    Whether or not the team members worked well together and why.
o    Whether the team did more or less than it forecast and why.
o    Whether the team has all the skills and facilities it needs to do the job.
o    Whether or not the developers understood the requirements and why.
o    Whether the team was able to complete the Sprint in line with the requirements and, if not, why not.
          While observation log analysis summarizes the tasks that novices could be objectively seen doing, it does not capture all aspects of their experience. In particular, investigating how novices feel about what they do and why they do it may also be instructive. In this section, we organize and classify some of the reflections made by novices in video diary entries about their experiences. These reflections help round out the picture of the social and hierarchical newcomer issues that define the novice software developer experience.
          Scaffolding the diary questions proved helpful in getting the subjects to think about their own learning experiences in university and industry. Particularly fruitful questions are listed here:
1. The combination of an Access Credential, a Credential Reader, and a comparison Database of Authorized Users is the centerpiece of the concept of Access Control Systems.
2. All Access Control Systems, whether electronic or procedural, use these same elements.
3. Early Electronic Access Control Systems used a variety of different card technologies including Magnetic Stripe, Barcode, Barium Ferrite, Hollerith, Rare-Earth, a very early form of Proximity technology, and Wiegand Wire Cards. More recent card technologies utilize 125 KHz Proximity, MiFare, and 13.56 MHz Contactless Smart Cards.
4. Keypads are also still in use, although they are less common.
5. The other type of Credential and Reader is the Biometric system, which compares a physical or behavioral attribute against a previously taken sample.

          At the end of the Sprint, you and the Scrum Master meet with the developers for a Sprint Review. This meeting is never more than four hours long. The Scrum Team and key stakeholders get together and look at what happened during the prior Sprint and the increment of functionality that emerged from it. The review covers what was done, how much was done, how effectively it was done, and the usefulness of the work. The increment must be complete, meaning it must be a usable piece of software. Product Backlog items not completely done go back into the Product Backlog as “still to be done.” New requirements often arise during the Sprint Review, as do new opportunities and challenges. Often, just seeing the increment of functionality evokes new ideas.

The Sprint

          The Scrum Team starts creating software on the day immediately after Sprint planning. The developers create an increment of software functionality during the first Sprint; it may be larger or smaller than forecast. The entire Scrum Team collaborates during the Sprint, clarifying the work. If the developers find that they have time left, or that the remaining time is inadequate, the work may be redefined, with requirements added or removed as needed.

      Every day during the Sprint, the developers hold a 15-minute meeting, called the Daily Scrum, to re-plan their upcoming work, always striving to deliver what was forecast. To maximize developer productivity, the Sprint objective must be agreed on by both the developers and the Product Owner. They agree to build as much of the required software as they can, knowing that they may be redirected with every new Sprint. The Product Owner agrees that the requirements the developers are working on will not change during a Sprint. Anything that wasn't planned (including, for example, bringing developers to customer meetings) waits for the next Sprint. Developer productivity arises from not being interrupted; employing shorter Sprints usually accommodates more frequent changes.
          The first task for the Scrum Master is to find developers to form the Development Team. The people on this team need to have the skills to turn the needs and requirements of the Product Owner (Product Backlog) into working increments of software with every Sprint.
     
      All members of the Scrum Team get together for introductions, discuss the upcoming work, and lay out the logistics for working together. The Scrum Team needs to know the vision (the needed and the hoped-for outcome), what outcomes would signify success and failure, and what the constraints are. The team looks only at the most important requirements and selects the maximum number that have a high likelihood of being developed in the upcoming Sprint. (The developers are skilled at decomposing big requirements into small actionable things that they can develop in a Sprint.)
          Scrum is a framework for managing complex work, such as software development. It is very simple, consisting only of three roles, three artifacts, and five events. Scrum binds them together with rules of play.


      The team of people that will be developing the software is called the Scrum Team. It consists of the person who wants the software developed (the Product Owner), a manager (the Scrum Master), and the developers. To avoid confusion, there can be only one Product Owner. The Product Owner decides what should be developed in every iteration, or Sprint in Scrum terminology, and evaluates the incremental results at the end of every Sprint. The Scrum Master manages the project the Scrum way. Some Scrum Masters are certified in Scrum; some have significant, verifiable experience in using Scrum successfully. Knowing how to manage Scrum Teams and projects is what counts.

Decisions Demand Transparency

          The world is uncertain. Software development is uncertain. Decisions still need to be made, and the organization that makes the best decisions thrives. Working software every 30 days provides solid, actionable information about what is happening. Each iteration is a constrained gamble. Almost without fail, the team is able to develop some software of value. Even in the worst case, where the team doesn't deliver anything, it has delivered valuable information about what is and isn't possible.
      Primavera, based in Philadelphia and now owned by Oracle, develops project management software used to manage predictive-process projects. The founders were aware of the irony that they had to use empirical software processes to build their predictive tools. However, to solve their problems, they had to resort to them.
          You have to have a firm grasp of the real facts to make a solid decision. The data or information you base your decision on must be transparent and clearly understood. In empirical software development, the increment is the clear and transparent information that decisions are based on.

      People must feel safe to have crucial conversations, to openly express what they think and feel, and to collaborate with others without reprisal or harm. These conversations are the heart of empiricism. Many workplaces are unsafe. Political agendas and hidden purposes pervert transparency. A manager's biggest job is to create a safe workplace, where people respect one another and feel safe to do their best, whatever that is.

          The people on the teams who are doing the work are the people best equipped to figure out how to do it. That thought runs counter to most management teachings. A manager is supposed to set a goal, figure out how to accomplish it, and then get people to follow that plan. However, then everyone is constrained to the experience, insights, and intelligence of the manager as they work.


      If the people doing the work are free to devise what to do, they can adapt to the circumstances, to the realities they face. They can share ideas and expertise to come up with the best solutions. They then try an approach, and if it doesn't work, they can try something else. This is self-organization. It applies the collective intelligence of all of the people on the team. They are not constrained to the manager's thinking and are free to do their best work.

Total Cost of Attendance

          Total cost of attendance represents the entire amount of funds needed to attend college and pay all necessary expenses each year. These expenditures are not limited to tuition and your dorm room and meal plan, also known as room and board. Print and web resources often list those amounts, but there are many other costs associated with attending college.
          Depending on your situation, you can fund your college experience with money from different sources, which include your savings, money from parents and relatives, work, student loans, scholarships, grants, and so on. It’s important to consider how you will pay for the following categories of expenses:
          Tuition is the fee to attend classes, and it is paid prior to the start of each semester. Costs stated on the college’s official website are the most accurate and up to date, but beware: you may experience increases during your enrollment. Also expect to be charged minor additional fees for computer, health services, and fitness facilities.
          The Net Price Calculator is a web-based tool used to estimate how much and what types of financial aid you will qualify for when attending college. As of October 1, 2011, federal law requires all colleges and universities to have a Net Price Calculator feature on their websites. To meet this requirement, some schools link to an external third-party website, which completes the calculations to estimate the cost of attendance and the expected financial-aid package for students and their families.
          Now that you have assessed both sides of the equation, your expenses and financial support, with the assistance of your team of family and guidance counselor, you are ready to evaluate the results.
          To make your final analysis more straightforward and even a little easier, contemplate answers to these questions while you wait to receive admissions decisions:
o    Overall, how does the total cost of attendance for each school compare to your financial situation?
o    Which schools on your final list create the most long-term financial responsibility/stress for you and your family?
o    Which schools create the least long-term financial responsibility/stress for you and your family?
o    Which schools agree with your vision of going to college and don’t break the bank for you and your family?

Technical Control Classification

          If you stored your fortune in a safe deposit box, you wouldn’t keep the key hanging on a hook outside your house. The same should be true of your passwords: if you keep them written on sticky notes at your desk, they’re not safe. But even if you don’t write them down, there are many ways that someone might discover your passwords.
          I look at some of the ways your passwords might fall into the wrong hands, and give you tips on keeping them safe. I also discuss recovering forgotten passwords, backing up your passwords, and devising a plan to ensure that your passwords are available in case of emergency.
In the previous chapter, we discussed information security. After reading this chapter on information privacy, you should realize that these concepts are very much related in practice. Security is the protection of information against threats such as unauthorized access to data, falsification of data, or denial of service. A company can provide every security protection possible for your information against these threats without necessarily having the intent of protecting the confidentiality of your information. Thus, information privacy is different from information security, even though these concepts are often used interchangeably.
          Let's start with technical controls, which are also known as automated controls. Technical protection is also referred to as logical protection. A simple way to recognize a technical control is that it typically involves a hardware or software process to operate.
          Technical protection may be implemented by using a combination of mandatory controls, discretionary controls, or role-based controls. Let's discuss each:
          There are three broad categories of authentication: something you know (usually a password); something you are (a unique, measurable physical characteristic, such as a fingerprint or iris pattern); and something you have (a smart card, token, or other device that can be identified uniquely—something I don’t cover in this book).

          Passwords provide a reasonably good way to protect access to data and resources, but in some cases they may not be enough. After all, passwords can be guessed, found, or stolen. So where greater security is needed, you may want to use other forms of authentication instead of a password—or, better yet, in addition to one.
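Combining factors can be sketched in a few lines. This is a hypothetical illustration of requiring “something you know” plus “something you have”; the names, the enrolled values, and the token-check mechanism are all invented for the example, and a real system would use a salted, slow password hash rather than bare SHA-256.

```python
# Illustrative two-factor check: a password AND a registered token ID.
import hashlib

stored_hash = hashlib.sha256(b"correct horse").hexdigest()  # enrolled password
enrolled_tokens = {"token-7731"}                            # enrolled device IDs

def authenticate(password: str, token_id: str) -> bool:
    knows = hashlib.sha256(password.encode()).hexdigest() == stored_hash
    has = token_id in enrolled_tokens
    return knows and has          # both factors must pass

print(authenticate("correct horse", "token-7731"))  # True
print(authenticate("correct horse", "token-9999"))  # False: wrong device
```

The design point is simply that each added factor is an independent check, so stealing the password alone is no longer enough.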

Types of Cryptography

         No matter what efforts an organization makes to provide the best security possible, and no matter what technologies and tools it invests in, there are always security risks involved. Risk management is the process of identifying, assessing, and prioritizing the security risks an organization may face. As a result of this process, organizations may decide to accept the risks, try to mitigate or prevent them by investing in security protections, or share them with another organization, for example by buying insurance. Organizations can refer to risk management standards available from bodies such as the Project Management Institute, the National Institute of Standards and Technology (NIST), and the International Organization for Standardization (ISO).
Most medium to large organizations today have security policies, which describe what the general security guidelines are for an organization. Security policies tend to be for internal use. The policies include a number of security procedures, which are specific statements describing how to implement the security policies. For example, a security policy could be “All users must change their passwords every two months.” One of its related security procedures could then describe steps to be taken to change one's password. Another procedure could involve an automated system to force users to change their password every two months, while an additional one could include actions that should happen if a user attempts to enter an unacceptable (not strong) password. A security policy should have clear goals and objectives, a detailed list of security policies and procedures, and also a list of actions for the enforcement of procedures.
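The last procedure above, rejecting passwords that are not strong, might look like the following sketch. The exact rules (minimum length, four character classes) are assumptions chosen for illustration, not a quoted standard.

```python
# Hypothetical strength check backing a "no weak passwords" procedure.
import re

def is_strong(password: str) -> bool:
    """Require length >= 8 with upper, lower, digit, and symbol."""
    return (len(password) >= 8
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(is_strong("hello"))        # False: too short, one character class
print(is_strong("Tr0ub4dor&3"))  # True: meets all four rules
```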

         There are two main types of cryptographic systems used today, symmetric and asymmetric, depending on whether the same key is used to encrypt and decrypt the data. In asymmetric encryption, two keys are used. The public key is used to encrypt messages; it is sent to any person or system with which one wishes to exchange encrypted messages. Using the public key, anyone can encrypt messages for the intended recipient, who then uses their private key to decrypt them. The public key and the private key are linked (forming a key pair), but only the recipient has the private key. This is also called public-key cryptography, since one of the keys can be shared with anyone (public).
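The key-pair relationship can be made concrete with a toy RSA example. This is purely illustrative: the primes are tiny so the arithmetic is visible, and real systems use vetted cryptographic libraries with 2048-bit or larger keys, never numbers like these.

```python
# Toy RSA: one key of the pair encrypts, only the other decrypts.
p, q = 61, 53                 # private primes
n = p * q                     # 3233, the modulus, part of both keys
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent (coprime with phi)
d = pow(e, -1, phi)           # private exponent: modular inverse of e

def encrypt(m: int) -> int:   # anyone holding the public key (e, n) can do this
    return pow(m, e, n)

def decrypt(c: int) -> int:   # only the holder of the private key (d, n) can
    return pow(c, d, n)

message = 65
ciphertext = encrypt(message)
print(decrypt(ciphertext) == message)  # True: the key pair round-trips
```

Note that knowing (e, n) lets anyone encrypt, but recovering d requires factoring n, which is what keeps the private key private at realistic key sizes.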

QUALITY-BASED RANK FUSION METHOD

The quality-based rank fusion method depends not only on the ranking lists of the unimodal classifiers, but also on the quality of the input images. Usually, this method is applied on top of another biometric rank fusion approach, modified to incorporate the quality of the input image. Quality-based fusion methods usually do not have a training phase and hence can be used in other biometric information fusion processes, such as fuzzy-logic-based fusion. There is no specific rule or general equation for quality-based fusion; researchers can apply the idea to any of their existing methods to improve the identification or verification rate. For example, Abaza and Ross introduced a quality-based rank fusion method by modifying an existing rank fusion approach.


Different existing methodologies for rank-level fusion in multimodal biometric systems have been reviewed. The methods include the plurality voting method, highest rank method, Borda count method, logistic regression method, and quality-based rank fusion method. Advantages and disadvantages of these rank fusion methods have been discussed in the context of the current state of the art in the discipline, and outcomes of different possible rank fusion methods have been shown with the help of appropriate diagrams. In the next chapter, a new rank fusion method, the Markov chain based rank fusion method, will be discussed; it has several advantages over the traditional rank fusion methods.

Before we discuss each of the security tools, we need to briefly mention that all tools and policies are meant to address one or more core security goals, known as the CIA triad: Confidentiality, Integrity, and Availability.

   Confidentiality involves making sure that only authorized individuals can access information or data. Integrity involves making sure that data are consistent and complete; for example, as a message is transmitted, its content is not modified during the transmission. Finally, availability involves ensuring that systems and data are available when they are needed. For systems to be considered highly available, the organization must protect them from disruptions due not only to security threats such as denial-of-service attacks, but also to power outages, hardware failures, and system upgrades.

PLURALITY VOTING RANK FUSION METHOD

The plurality voting method is a positional method for rank aggregation that takes into account information about individual matchers' preference orderings. However, it does not use a matcher's entire preference ordering; it uses only each matcher's most preferred alternative. This method is good for combining a small number of specialized matchers. The consensus ranking is obtained by sorting the identities according to the number of times they appear in the top position. The algorithm is adopted from Abaza and Ross.
Algorithm 1: Plurality voting
o    Get three ranking lists from different biometric classifiers.
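The consensus step of Algorithm 1 can be sketched as follows, assuming each classifier returns a best-first list of identity labels; the three sample lists are invented for illustration.

```python
# Plurality voting rank fusion: each matcher "votes" for the identity it
# ranks first; identities are sorted by their count of first-place votes.
from collections import Counter

def plurality_vote(ranking_lists):
    """Each list orders identities best-first; count top-position votes."""
    votes = Counter(ranks[0] for ranks in ranking_lists)
    return [identity for identity, _ in votes.most_common()]

face  = ["alice", "bob", "carol"]
iris  = ["alice", "carol", "bob"]
voice = ["bob", "alice", "carol"]
# alice gets 2 first-place votes, bob gets 1; carol never ranks first,
# so she receives no votes and does not appear in the consensus list.
print(plurality_vote([face, iris, voice]))
```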
The highest rank method is good for combining a small number of specialized matchers and hence can be effectively used for a multimodal biometric system where the individual matchers perform well. In this method, the consensus ranking is obtained by sorting the identities according to their highest rank.
The steps in Algorithm 2 show the procedure of employing highest rank fusion method in a multimodal biometric system.
Algorithm 2: Highest rank
o    Get the ranking lists from different biometric classifiers.

o    For all ranking lists 
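Continuing Algorithm 2, the consensus step can be sketched as follows: for each identity, keep the best (lowest-numbered) rank it received from any matcher, then sort by that value. The two sample ranking lists are invented for illustration.

```python
# Highest rank fusion: an identity's fused rank is the best rank it
# achieved across all matchers' best-first ranking lists.
def highest_rank(ranking_lists):
    best = {}
    for ranks in ranking_lists:               # each list is best-first
        for position, identity in enumerate(ranks, start=1):
            best[identity] = min(best.get(identity, position), position)
    return sorted(best, key=lambda identity: best[identity])

face = ["alice", "bob", "carol"]
iris = ["carol", "alice", "bob"]
# alice and carol both achieve rank 1 somewhere; bob's best rank is 2.
print(highest_rank([face, iris]))
```

Ties (here, alice and carol) are the known weakness of this method; in practice a small perturbation factor or a secondary criterion is used to break them.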

FUSION AFTER MATCHING BIOMETRICS

There are a number of challenges in this area requiring further investigation. The first is rooted in the choice of a fusion method most appropriate for the application domain. The decision is often made ad hoc, or based on non-essential constraints such as availability of the fusion module or low cost, instead of being based on the actual fit between the method and the application area.
Arguably, one of the critical components of multimodal biometric system development is the information fusion module. It is also the component most versatile in the forms of input data (processed or unprocessed), types of features (geometric, signal, appearance-based, etc.), and decision-making processes (adaptive, intelligent, fuzzy, learning-based, heuristic-based) it can utilize. Needless to say, the initial choice of biometric (physical, behavioral, soft, or social) is both an input to the information fusion process and dictates some of the choices to be made.
A general rule in theory assumes that integrating data at an early stage of processing leads to systems that are more accurate than those where integration is introduced at later stages. Unfortunately, in practice, fusion at the sensor level is hard to achieve because of the different natures of the biometric traits, which may be hardly compatible (e.g., fingerprint and face). Moreover, most commercial biometric systems do not provide access to the feature sets, eliminating the feasibility of fusion at the feature level. Fusion at the matching level and at the decision level does not require the creation of new databases or matching modules (those that constitute the monomodal subsystems are employed).

The rank-level fusion approach is used in biometric identification systems when each individual matcher’s output is a ranking of the “candidates” in the template database, sorted in decreasing order of match score (or increasing order of distance score, where appropriate). The system is expected to assign a higher rank to a template that is more similar to the query. The plurality voting method, highest rank method, Borda count method, logistic regression method, Bayesian method, and quality-based method are reported in the literature for performing rank-level fusion in multibiometric systems. All of these approaches combine the ranking lists produced by the individual matchers into a single consensus ranking.
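Of the methods listed above, the Borda count is easy to sketch: each identity earns points inversely related to its position in each list, and the consensus ranking sorts identities by total points. The ranking lists below are invented sample data.

```python
# Borda count rank fusion: position-based points summed across matchers.
def borda_count(ranking_lists):
    scores = {}
    for ranks in ranking_lists:                     # each list is best-first
        n = len(ranks)
        for position, identity in enumerate(ranks):
            scores[identity] = scores.get(identity, 0) + (n - position)
    return sorted(scores, key=lambda ident: scores[ident], reverse=True)

face  = ["alice", "bob", "carol"]
iris  = ["bob", "alice", "carol"]
voice = ["alice", "carol", "bob"]
print(borda_count([face, iris, voice]))  # alice: 8 pts, bob: 6, carol: 4
```

Unlike plurality voting, the Borda count uses every position in every list, so a consistently second-place identity still accumulates credit.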

FUSION BEFORE MATCHING BIOMETRICS

Fusion in this category integrates evidence before matching or comparison of data samples against the user sample occurs. According to Kokar et al., “By combining low level features it is possible to achieve a more abstract or a more precise representation of the world”. Thus, biometric sources at an earlier stage contain much more information than after processing.
However, the extra costs of storing raw data and additional complexity in developing matching methods do not make this approach quite practical.
Fusion-after-matching methods consolidate information obtained after individual biometric matching or comparison is done. Most multimodal biometric systems use these fusion methods because the information needed for fusion is more easily available than for fusion-before-matching methods. The matching scores, the ranking list (sorted order) based on matching scores, or the individual biometric decisions (yes/no) can be used for fusion in this category.
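One common after-matching approach using scores rather than ranks is match-score fusion: normalize each matcher's scores to a common range, then sum them. The min-max normalization and equal weights below are standard but simple choices, and the score values and scales are invented for illustration.

```python
# Match-score fusion: min-max normalize each matcher, then equal-weight sum.
def min_max(scores):
    lo, hi = min(scores.values()), max(scores.values())
    return {ident: (s - lo) / (hi - lo) for ident, s in scores.items()}

def fuse_scores(score_sets):
    fused = {}
    for scores in map(min_max, score_sets):   # normalize each matcher first
        for ident, s in scores.items():
            fused[ident] = fused.get(ident, 0.0) + s
    return max(fused, key=lambda ident: fused[ident])  # best fused identity

face_scores = {"alice": 0.90, "bob": 0.40, "carol": 0.10}  # similarity, 0-1
iris_scores = {"alice": 70.0, "bob": 95.0, "carol": 20.0}  # different scale
print(fuse_scores([face_scores, iris_scores]))
```

Normalizing first matters: without it, the iris matcher's larger numeric scale would dominate the sum regardless of how confident the face matcher was.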

Information fusion techniques applied in the multimodal biometrics area are discussed. Usually, the information originating from different sources in a multimodal biometric system can be combined at the sensor level, feature extraction level, match score level, rank level, or decision level. Among these, sensor-level fusion and feature-extraction-level fusion are considered stages for combining raw data, that is, the actual biometric data. Match score, rank, and decision-level fusion methods combine processed data, or data obtained through some experimentation. There is also another novel fusion method that is becoming highly popular: fuzzy fusion.

BIOMETRIC INFORMATION FUSION

Information fusion can be defined as "an information process that associates, correlates and combines data and information from single or multiple sensors or sources to achieve refined estimates of parameters, characteristics, events and behaviors". A good information fusion method lowers the impact of less reliable sources relative to more reliable ones. A number of disparate research areas, including robotics, image processing, pattern recognition, and information retrieval, utilize and describe information fusion in their own contexts. Thus, information fusion has established itself as an independent research area over the last decade through its impact on a vast number of disparate fields. For example, the concept of data and feature fusion first appeared in multi-sensor processing. In fact, information fusion was long used in engineering and signal-processing fields, as well as in decision-making and expert systems, before several other research fields found it useful. Besides the more classical data fusion approaches in robotics, image processing, and pattern recognition, the information retrieval community has long been known to combine multiple information sources.
Due to some problems associated with unimodal biometric data, such as small variation over the population, large intra-class variability over time, and the absence of a biometric sample in a portion of the population, the use of multimodal biometrics is a first-choice solution. The main objective of a multimodal biometric system is to improve the recognition performance of the system and to make the system robust to the limitations associated with unimodal biometric systems. Over the years, several approaches have been proposed and developed for multimodal biometric authentication systems, with different biometric traits and different fusion mechanisms.

Multimodal biometric systems use multiple sources of biometric information, and information fusion is essential for the analysis, indexing, and retrieval of such information. There are a number of fusion techniques for any particular kind of information. Choosing an appropriate fusion technique for specific information depends on the needs of the application and on the performance of the techniques as demonstrated by previous research. There is a consensus in the biometric literature that all the various levels of multimodal biometric information fusion fall into two broad categories: fusion before matching and fusion after matching. The fusion-before-matching category contains sensor level and feature level fusion, while fusion after matching contains match score level, rank level, and decision level fusion. A novel fusion mechanism recently established in the BT Lab is based on fuzzy logic and is hence named fuzzy biometric fusion. Fuzzy biometric fusion can be employed either in the initial stage, i.e., before matching occurs, or in the latter stage, i.e., after matching occurs.
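Purely as an illustration of the general idea (not the BT Lab mechanism itself), a toy fuzzy fusion of two normalised match scores might look as follows; the triangular membership functions and the two rules are invented for this sketch:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_fuse(score1, score2):
    """Toy fuzzy fusion of two match scores already normalised to [0, 1].

    Each score is fuzzified into 'low' and 'high' sets; two rules fire:
      IF both high THEN accept;  IF either low THEN reject.
    The crisp output is accept-strength / (accept + reject strength),
    i.e. a miniature Sugeno-style inference with conclusions 1 and 0.
    """
    high1, high2 = tri(score1, 0.3, 1.0, 1.7), tri(score2, 0.3, 1.0, 1.7)
    low1, low2 = tri(score1, -0.7, 0.0, 0.7), tri(score2, -0.7, 0.0, 0.7)
    accept = min(high1, high2)   # fuzzy AND -> min
    reject = max(low1, low2)     # fuzzy OR  -> max
    if accept + reject == 0.0:
        return 0.5               # no rule fires: undecided
    return accept / (accept + reject)
```

Two strong scores drive the output toward 1 (accept), one weak score drives it toward 0 (reject), and borderline scores land in between, which is the graceful-degradation behaviour fuzzy fusion aims for.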

INFORMATION SOURCES FOR MULTIBIOMETRIC SYSTEMS

Development of a multi biometric system for security purposes is not a trivial task. As with any unimodal system, the data acquisition procedure, sources of information, level of expected accuracy, system robustness, user training, data privacy, and dependency on the proper functioning of hardware and on proper operational procedures directly impact the performance of the security system. While using more than one data source alleviates some issues (such as noisy data, missing samples, errors in acquisition, spoofing, etc.), this advantage does not come free. The choice of biometric information to be integrated or fused must be made, an information fusion methodology should be selected, a cost vs. benefit analysis needs to be performed, and processing requirements must be evaluated.
For many applications, there are additional sources of non-biometric information that can be used for person authentication, while in others the use of a single biometric is not sufficiently secure or does not provide adequate coverage of the user population; the latter can be indicated by a parameter such as the Failure to Enroll rate. Thus, multi biometric systems emerged as a way to provide more secure and reliable person authentication under these conditions.

It must be pointed out that in the literature there is a slight difference between two terms. The term multimodal biometric system refers specifically to biometric systems in which more than one biometric modality is used. The term multi biometric is more generic and includes multimodal systems as well as other configurations that use only one biometric modality with different samples, instances, or algorithms.

MODEL-BASED BIOMETRICS

Identifying patterns in behavioral biometrics is, in general, a slightly different and somewhat more complex problem than identifying features in physiological biometrics. Examples of behavioral biometrics include signature, voice, gait, and typing patterns. Due to the temporal, dynamic features associated with each such biometric (samples must be observed over a period of time for best matching results), these problems are often treated with a class of signal-processing methods. In a nutshell, the task and the overall biometric system architecture remain the same; however, upon closer examination, some very specialized methods taking advantage of the uniquely continuous nature of these biometrics have been developed.
Different image processing methods and algorithms that are popular in biometric data processing are presented here. For most of the biometric identifiers used today, an image of the identifier is the main input to the biometric system. Thus, the processing of biometric images is essential for efficient and reliable performance of the biometric system. The main methods used for biometric image processing are digitization, compression, enhancement, segmentation, feature measurement, image representation, image modeling, and design methodology. The feature extraction methods are classified as appearance-based and topological feature-based, and are illustrated on the example of different fingerprint recognition approaches.
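Two of these steps, normalisation and segmentation by thresholding, can be sketched as follows for a greyscale image stored as a list of pixel rows; the target mean and variance are conventional but arbitrary choices:

```python
def normalize(img, target_mean=128.0, target_var=2000.0):
    """Normalise a greyscale image (list of rows of 0-255 values) to a
    prescribed mean and variance, a standard first step before
    fingerprint ridge enhancement and segmentation."""
    pix = [p for row in img for p in row]
    n = len(pix)
    mean = sum(pix) / n
    var = sum((p - mean) ** 2 for p in pix) / n or 1.0  # guard flat images
    scale = (target_var / var) ** 0.5
    return [[target_mean + (p - mean) * scale for p in row] for row in img]

def binarize(img, threshold=128.0):
    """Segment ridges (dark pixels -> 1) from valleys (light pixels -> 0)."""
    return [[1 if p < threshold else 0 for p in row] for row in img]

# Tiny hypothetical image: two dark (ridge) and two light (valley) pixels.
img = [[100, 110], [200, 210]]
norm = normalize(img)
ridges = binarize(norm)
```

Real systems add contextual filtering (e.g. oriented Gabor filters) between these two steps, but the normalise-then-segment skeleton is the same.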
The optimal biometric system is one having the properties of distinctiveness, universality, permanence, acceptability, collectability, and security. As we saw in the introductory chapters, no existing biometric security system simultaneously meets all of these requirements. Despite tremendous progress in the field over the last decades, researchers have noticed that while a single biometric trait might not always satisfy secure system requirements, a combination of traits from different biometrics will do the job. The key is in the aggregation of data and intelligent decision making based on the responses received from the individual (unimodal) biometric systems.

Thus, multimodal biometrics emerged as a new and highly promising approach to biometric knowledge representation, one that strives to overcome the problems of individual biometric matchers by consolidating the evidence presented by multiple biometric traits. As an example, a multimodal system may use both face recognition and signature to authenticate a person. Driven by the need for reliable and efficient security solutions in security-critical applications, multimodal biometric systems have evolved over the last decade into a viable alternative to traditional unimodal security systems.

TOPOLOGY-BASED INTELLIGENT PATTERN ANALYSIS IN BIOMETRICS

The goal of any intelligent processing is to minimize the overhead associated with performing computations while at the same time maximizing the output. The same principle governs the behavior of most public and commercial organizations: achieving high production through resource and process optimization. While appearance-based methods excel at capturing even subtle features in a multitude of high-dimensional data, sometimes generalizing the results and noting common patterns leads to process optimization without sacrificing security system performance. This section presents topology-based methods, which work best with biometric data that has prominent geometric features, such as fingerprint or hand/palm biometrics. We start by outlining the topology-based methodology, with its roots in computational geometry.
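As a toy example of the topology-based idea, one can describe a fingerprint by a proximity graph over its minutiae coordinates and compare prints by their edge sets. This sketch assumes minutiae have already been extracted and indexed in corresponding order by a registration step, which is a strong simplification of any real system:

```python
from itertools import combinations

def minutiae_graph(points, radius):
    """Build a proximity graph over minutiae coordinates: two minutiae
    are connected when their distance is at most `radius`. The edge set
    (with lengths rounded to one decimal) acts as a simple structural
    signature of the print that is invariant to translation."""
    edges = set()
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
        if d <= radius:
            edges.add((i, j, round(d, 1)))
    return edges

def graph_similarity(g1, g2):
    """Jaccard similarity of two edge sets (1.0 = identical structure)."""
    if not g1 and not g2:
        return 1.0
    return len(g1 & g2) / len(g1 | g2)

a = minutiae_graph([(0, 0), (3, 0), (0, 4)], radius=5.0)
b = minutiae_graph([(1, 1), (4, 1), (1, 5)], radius=5.0)  # same print, shifted
c = minutiae_graph([(0, 0), (3, 0), (0, 6)], radius=5.0)  # a different print
```

The shifted copy produces an identical graph (similarity 1.0), while the altered print shares only part of its structure, which is the kind of robustness to global displacement that motivates topological representations.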


Thursday, 15 August 2013

INTELLIGENT PATTERN ANALYSIS IN BIOMETRICS


ALTERNATIVE IDENTIFICATION DEVICES

To address the biometric verification component and determine which biometric system is best for given applications in the complex clinical environment, a fingerprint scan system was added onto the LTVS to run in parallel with the facial recognition system. Each solution was evaluated and documented for its advantages and pitfalls in the clinical environment.

The fingerprint scanning system was added onto the FRS (facial recognition system) to become the FRSS (facial and fingerprint recognition system). Because of the modular design of the LTVS, such a modification was not difficult to accomplish.
In the minimally invasive spinal surgery (MISS) ePR system we discussed in Chapter 24, there was no patient identification component to verify that the person is the actual patient to be operated on. After the MISS ePR had been in clinical evaluation for several months, the surgeon in charge suggested that a patient verification system be integrated with the ePR system. Since a surgical patient is mostly immobile, there was no advantage to using the facial verification system, so we decided to use the fingerprint scanning method. We learned a great deal about the fingerprint scanning method, not only its characteristics but also the method of integrating it with a larger imaging informatics system, during the latter part of the LTVS development. In that environment the fingerprint method had been developed as the second patient verification system, in addition to the facial verification method. So it was not difficult to add this module to the existing MISS ePR. This section summarizes the development, the operational procedure, and the current status.
In this chapter we discuss imaging informatics by taking advantage of other existing information technologies (IT) not necessarily used in medical imaging and integrating them with medical imaging advances. In particular, we present a location tracking and verification system (LTVS) for the clinical environment. Although the technologies used in the LTVS are not necessarily at the cutting edge of medical imaging or of IT, the combination of these technologies can help resolve patient workflow and patient protection issues in the clinical environment that have been discussed in the imaging informatics community for many years.

We start the presentation by defining what the LTVS is and why we need it. Currently available tracking, identification, and verification technologies are introduced. We then give a step-by-step modular system integration of the LTVS, from prototype design and development, to implementation and clinical evaluation of the prototype in an outpatient imaging center, to cost analysis. The prototype was used to track the movement of patients and personnel to improve the efficiency of imaging procedure workflow and to safeguard patients in the clinical environment. We conclude with a discussion of a fingerprint scanning module for surgical patient identification and verification to ensure that the person is the right patient to be operated on.

THE BASICS OF FINGERPRINT ANALYSIS

Fingerprint-analysis algorithms used by scanner systems are designed to capture and recognize the same basic features that have been employed by fingerprint-analysis experts for decades. At its core, fingerprint analysis seeks to identify specific minute features (minutiae) within the fingerprint structure and compare them to others in a database. Digital fingerprint scanners can also add other information, such as specific distances between minutiae and the direction of whorls in the fingerprint structure, to further increase the uniqueness of the measurement and thereby decrease FAR and FRR numbers.
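The FAR and FRR figures themselves are simple empirical proportions at a chosen decision threshold; the score lists below are invented for illustration:

```python
def far_frr(genuine, impostor, threshold):
    """Empirical error rates at a decision threshold on similarity scores.

    genuine : scores from comparisons of samples of the SAME person
    impostor: scores from comparisons of DIFFERENT people
    A comparison is accepted when its score >= threshold.
    FAR = accepted impostors / total impostors (false accept rate)
    FRR = rejected genuines  / total genuines  (false reject rate)
    """
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

genuine = [0.91, 0.85, 0.78, 0.60, 0.95]   # hypothetical same-finger scores
impostor = [0.10, 0.35, 0.52, 0.72, 0.20]  # hypothetical different-finger scores
far, frr = far_frr(genuine, impostor, threshold=0.7)
```

Raising the threshold trades FAR for FRR and vice versa; the additional minutiae information mentioned above shifts both curves down by making genuine and impostor score distributions overlap less.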
In both the classic "ink" type of fingerprint recording and in the digital capture of a fingerprint using one of the technologies listed above, the fingerprint friction ridge, the raised portion that contacts the glass surface of the scanner, is recorded as black, and the fingerprint valley, which is filled with air, is recorded as white. Keeping these conventions in mind, fingerprint experts have developed a list of minutiae that can be found in most fingerprints. The primary minutiae employed in fingerprint characterization include ridge endings, bifurcations, short ridges (dots), enclosures (lakes), spurs, and crossovers.
Iris scanners capture the minute patterns in the iris, the colored region between the pupil and the sclera, and compare these patterns to previously stored iris scans. Iris scans have the advantage that eyeglasses and contact lenses need not be removed for the system to operate properly.
The first step in the process is the isolated capture of the iris, without the sclera, pupil, and any light reflections that might be present. This is usually accomplished by smoothing (averaging) the picture so that the disk of the pupil can be more easily identified by software. Next, the software locates the best-fit circle that just encloses the pupil and the best-fit circle that captures the outer edge of the iris.
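The smoothing step can be sketched as a plain box (mean) filter; real systems use optimised library routines, but the idea is the same:

```python
def box_blur(img, k=1):
    """Smooth a greyscale image (list of pixel rows) by averaging each
    pixel with its neighbours in a (2k+1) x (2k+1) window, clipped at
    the borders. Averaging suppresses specular reflections and fine
    texture so the dark pupil disk stands out for circle fitting."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - k), min(h, y + k + 1))
            xs = range(max(0, x - k), min(w, x + k + 1))
            vals = [img[yy][xx] for yy in ys for xx in xs]
            out[y][x] = sum(vals) / len(vals)
    return out

# A lone bright spot (e.g. a reflection) is spread out and attenuated.
out = box_blur([[0, 0, 0], [0, 9, 0], [0, 0, 0]], k=1)
```

After smoothing, the pupil boundary can be found by thresholding for the darkest region and fitting a circle to its edge pixels.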

In the 1930s, Simon and Goldstein published a paper reporting that the pattern of minute blood vessels in the retina of the eye is unique and could be used as the basis for identifying a person. The eye is protected from the external environment much as the brain is and, as a critical sensory organ, is also carefully protected against injury throughout a person's lifetime. In a subsequent study performed by Dr. Paul Tower in 1955, it was shown that these retinal blood vessel patterns are unique even in the specific case of identical twins, where such a difference is least likely to occur. In fact, Tower showed that, of all the factors he compared between identical twins, the retinal blood vessels showed the least similarity.

Pattern Recognition Problems

Fingerprint scanners are the most widely used form of personal biometric today, due largely to their small size and ease of use. A person simply places his finger on the reader, and he is either granted or denied access. In this section, we will examine the operation of the fingerprint scanner at the device and analysis levels so that technology selection and implementation decisions can be made with better awareness of possible limitations.
At the very outset, the reader needs to be cautioned that the degree to which a person's fingerprint templates (the recorded characteristics of the finger) are protected while stored by the operating system may create an easier attack point than trying to break the system by creating a fake fingerprint. Biometric fingerprint scanners should therefore be used with careful attention paid to the encryption and protection of user fingerprint templates. Failure to do so will directly affect the strength of protection offered by the system.
In comparison with the rich literature on feature-extraction-oriented LDA for SSS problems, studies on the pattern classification aspect of LDA for SSS problems are quite few. To the best of our knowledge, apart from large margin linear projection (LMLP), minimum norm minimum squared-error (MNMSE), and maximum scatter difference (MSD), there has been almost no endeavor in this direction.

Since FDC, MSE, FLD, and FSD all involve computing the inverse of one or several scatter matrices of the sample data, it is a precondition that these matrices be nonsingular. In small sample size (SSS) pattern recognition problems such as appearance-based face recognition, the ratio of the dimensionality of the input space to the number of samples is so large that the matrices involved are all singular. As a result, standard LDA methods cannot be applied directly to these SSS problems. Due to prospective applications in biometric identification and computer vision, LDA for SSS problems has become one of the hottest research topics in pattern recognition.
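The singularity is easy to demonstrate numerically: with c classes and n samples in total, the within-class scatter matrix has rank at most n - c, so in an SSS setting its rank falls far short of the input dimensionality. A small sketch, with dimensions chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# SSS setting: dimensionality 50, but only 3 samples in each of 2 classes.
d, samples_per_class = 50, 3
X1 = rng.normal(0.0, 1.0, size=(samples_per_class, d))
X2 = rng.normal(1.0, 1.0, size=(samples_per_class, d))

def within_class_scatter(classes):
    """S_w = sum over classes of (X - mean)^T (X - mean)."""
    d = classes[0].shape[1]
    S = np.zeros((d, d))
    for X in classes:
        centered = X - X.mean(axis=0)
        S += centered.T @ centered
    return S

Sw = within_class_scatter([X1, X2])
rank = np.linalg.matrix_rank(Sw)
print(rank, d)  # rank is at most n - c = 4, far below d = 50
```

Since the 50 x 50 matrix has rank at most 4, it cannot be inverted, which is exactly why standard LDA fails here and why the SSS-specific variants above were developed.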