Monday, September 30, 2013

Unit_06_ The Traditional Approach to Requirements



The Traditional Approach to Requirements

Topics: Traditional and Object-Oriented Views of Activities and Use Cases; Data Flow Diagrams; Documentation of DFD Components; Locations and Communication through Networks

Learning Objectives

After reading this chapter, you should be able to:

  • Explain how the traditional approach and the object-oriented approach differ when modeling the details of a use case
  • List the components of a traditional system and the symbols representing them on a data flow diagram 
  • Describe how data flow diagrams can show the system at various levels of abstraction
  • Develop data flow diagrams, data element definitions, data store definitions, and process descriptions
  • Develop tables to show the distribution of processing and data access across system locations.
The traditional and object-oriented approaches to system development differ in how a system's response to an event is modeled and implemented.

In the traditional approach, entity-relationship diagrams represent things in the user's work domain. The models that represent activities (use cases) and the interactions among activities and things are very different from object-oriented models. Although object-oriented models are now more widely used, traditional models are still used in many contexts, including documenting older systems and presenting requirements for new systems to users in an easy-to-read fashion.

TRADITIONAL AND OBJECT-ORIENTED VIEWS OF ACTIVITIES AND USE CASES

The traditional approach views a system as a collection of processes, some performed by people and some by computers.

Traditional computer processes are much like procedural computer programs; they contain instructions that execute in a sequence. 

When a process executes, it interacts with stored data, reading data values and then writing other data values back to the data file.

The traditional approach to systems, then, involves processes, stored data, inputs, and outputs, and it includes processing models that emphasize these system features, as shown on the left side of the image.

The object-oriented approach views a system as a collection of interacting objects. The objects are based on things in the problem domain. Objects are capable of behaviors (methods) that allow them to interact with each other and with people using the system. There are no conventional computer processes, data files, or databases per se; objects carry out the activities and remember the data values. When modeling what the system does in response to an event, the object-oriented approach includes models that show objects, their behavior, and their interactions with other objects.
DATA FLOW DIAGRAMS
The traditional approach to information system development describes activities as processes carried out by people or computers. A graphical model that has proven quite valuable for modeling processes is the DATA FLOW DIAGRAM {data flow diagram (DFD): a diagram that represents system requirements as processes, external agents, data flows, and data stores}. The data flow diagram is a graphical system model that shows all the main requirements for an information system in one diagram: inputs and outputs, processes, and data storage.
A DFD is easy to read because everyone working on a development project can see all aspects of the system working together at once. Because it is a graphical model with only five symbols to learn, end users, management, and all information systems workers can typically read and interpret the diagram with minimal training.
[SQUARE] External Agent {a person or organization, outside the system boundary, that supplies data inputs or accepts data outputs.}
[RECTANGLE WITH ROUNDED CORNERS] Process {a symbol that represents an algorithm or procedure by which data inputs are transformed into data outputs.}
[LINES WITH ARROWS] Data Flow {a symbol that represents data movement among processes, data stores, and external agents.}
[FLAT OPEN-ENDED RECTANGLE] Data Store {a place where data is held pending future access by one or more processes.}
[DOUBLE-HEADED ARROW] Real-Time Link {communication between an external agent and a process while the process is executing.}
Each data store represents an entity from the CSS entity-relationship diagram. The data flow diagram shows the system use case in response to this one event in graphical form and integrates processing triggered by the event with the data entities modeled using the ERD.
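These symbols map naturally onto a small data model, which is also handy later for automated quality checks on a DFD. Here is a minimal Python sketch; the class and field names are illustrative assumptions of these notes, not anything from the chapter:

    from dataclasses import dataclass

    @dataclass
    class Process:
        name: str  # rounded rectangle

    @dataclass
    class ExternalAgent:
        name: str  # square

    @dataclass
    class DataStore:
        name: str  # flat open-ended rectangle; represents an ERD entity

    @dataclass
    class DataFlow:
        name: str           # label on the arrow
        source: object      # a Process, ExternalAgent, or DataStore
        destination: object

A DFD at any level is then just a collection of these components plus the flows connecting them.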
DATA FLOW DIAGRAMS AND LEVELS OF ABSTRACTION
Many different types of data flow diagrams are produced to show system requirements. Some show processing at a higher level (a more general view of the system), while others show it at a lower level (a more detailed view of one process). These differing views of the system are called LEVELS OF ABSTRACTION {any modeling technique that breaks the system into a hierarchical set of increasingly more detailed models.}
DFDs can show either higher-level or lower-level views of the system. The high-level processes on one DFD can be decomposed into separate lower-level, detailed DFDs. Processes on the detailed DFDs can also be decomposed into additional diagrams to provide multiple levels of abstraction.
The image shows how, at each level of detail, additional information is provided about one process from the next higher level.
CONTEXT DIAGRAM
Context Diagram {a DFD that summarizes all processing activity within the system in a single process symbol.}
A context diagram describes the most abstract view of a system. All external agents and all data flows into and out of the system are shown in this one diagram, with the entire system represented as one process. The context diagram clearly shows the system boundary: the system scope is defined by what is represented within the single process and what is represented as external agents. External agents that supply or receive data from the system are outside the system scope, and everything else is inside it.
This diagram does not usually show data stores, because all the system's data stores are considered to be within the system scope (part of the internal implementation of the process that represents the system). Data stores may be shown when they are shared by the system being modeled and another system.
The context diagram is usually created in parallel with the initial list of use cases and events. Each trigger for an external event becomes an input data flow, and its source becomes an external agent. Each response becomes an output data flow, and its destination becomes an external agent. Triggers for temporal events are not data flows, so there are no input data flows for temporal events in a context diagram.
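That construction rule is mechanical enough to sketch in code. The following hypothetical Python fragment derives the context diagram's agents and flows from an event table; the event-table fields and example values are illustrative assumptions, not from the chapter:

    events = [
        {"type": "external", "trigger": "New order", "source": "Customer",
         "responses": [("Order confirmation", "Customer")]},
        {"type": "temporal", "trigger": None, "source": None,
         "responses": [("Order summary report", "Management")]},
    ]

    agents = set()
    input_flows = []
    output_flows = []
    for event in events:
        if event["type"] == "external":
            # The trigger becomes an input data flow; its source becomes an agent.
            input_flows.append((event["trigger"], event["source"]))
            agents.add(event["source"])
        # Temporal events contribute no input flows: their triggers are
        # points in time, not data.
        for response, destination in event["responses"]:
            output_flows.append((response, destination))
            agents.add(destination)

    print(agents, input_flows, output_flows)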
DFD FRAGMENT
DFD Fragment {a DFD that represents the system response to one event within a single process symbol.} A fragment is created for each use case triggered by an event in the event table. Each fragment is a self-contained model showing how the system responds to a single event. The analyst usually creates the fragments one at a time, focusing attention on one part of the system at a time. The fragments are drawn after the event table and context diagram are complete. The pictured fragments show details of interactions among the process, external agents, and internal data stores. The data stores used on a DFD fragment represent entities on the ERD; each DFD fragment shows only those data stores that are actually needed to respond to the event.
THE EVENT-PARTITIONED SYSTEM MODEL
{A DFD that models system requirements by using a single process for each event in a system or subsystem.} All the DFD fragments for a system or subsystem can be combined on a single DFD called the event-partitioned system model (diagram 0).
Diagram 0 is used primarily as a presentation tool. It summarizes an entire system or subsystem in greater detail than a context diagram.
Reasons to Avoid diagram 0
The information content duplicates the set of DFD fragments.
The diagram is often complex and unwieldy, particularly for large systems that respond to many events. 
Redundancy and complexity are two DFD characteristics that analysts should avoid whenever possible.
When a system responds to many events, it is commonly divided into subsystems, and a context diagram is created for each subsystem. The example illustrates a system divided into subsystems based on use case similarities, including interactions with external agents, interactions with data stores, and similarities in required processing. The next image shows the context diagram for the order-entry system.
Always remember that diagram 0 is used as a presentation aid. The DFD fragments show which processes interact with which individual data stores.
DECOMPOSITION TO SEE ONE ACTIVITY'S DETAIL
Some DFD fragments involve a lot of processing that the analyst needs to explore in more detail. Further decomposition helps the analyst learn more about the requirements while also producing needed documentation. The diagram decomposes a process into four subprocesses:
1. Record customer information
2. Record order
3. Process order transaction
4. Produce confirmation
These subprocesses are viewed as the four major steps required to complete the activity.


PHYSICAL AND LOGICAL DFDs
Physical DFD {a DFD that includes one or more assumptions about implementation technology}
Logical DFD {a DFD developed under the assumption of perfect internal technology}
Perfect Internal Technology {an assumption that includes such technology capabilities as instant processing and data retrieval, infinite storage and network capacity, and a complete absence of errors}
A DFD can be a physical system model, a logical system model, or a blend of the two. In a physical DFD, one or more assumptions about implementation technology are embedded in the DFD. These assumptions can take many forms and might be very difficult to spot. A logical DFD is drawn as if the system might be implemented with any technology. One way that analysts develop logical DFDs is to assume that the system is implemented with perfect internal technology. Specifics of the assumption include processes that never make mistakes, data stores that can hold unlimited amounts of data, and flows and processes with infinite capacity and zero transmission time. All alternatives are described as possible, and if the model is a logical model, you should not be able to tell how the system is implemented. At the same time, the processing requirements should be fairly detailed, down to indicating what attribute values are needed.
Inexperienced analysts often develop physical DFDs without realizing it. The path to developing such a model is simple: model everything the current system does exactly the way it does it. The problem with this approach is that design assumptions based on outdated technology limitations can become inadvertently embedded in the new system. This problem is most prevalent when analysis and design are performed by different persons or teams. The designer may not realize that some of the requirements embedded in the DFDs are simply reflections of the way things were in the past, not the way they should be in the future.
Physical DFDs are sometimes developed and used during the last stages of analysis or early stages of design. They are useful models for describing alternate implementations of a system prior to developing more detailed design models. Analysts should avoid creating physical DFDs during all analysis activities, except when generating design and implementation alternatives. Even during that activity, analysts should clearly label physical DFDs as such so readers know the model represents one possible implementation of the logical system requirements. 
EVALUATING DFD QUALITY
A high-quality set of DFDs is readable, is internally consistent, and accurately represents system requirements. Accuracy of representation is determined primarily by consulting users. A project team can ensure readability and internal consistency by applying a few simple rules to DFD construction. Analysts can apply these rules while developing the DFDs or during a separate quality check after preparing DFD drafts.
1. Minimizing Complexity. If too much information is presented at once, a reader may experience INFORMATION OVERLOAD {difficulty in understanding that occurs when a reader receives too much information at one time.} The key to avoiding information overload is to divide information into small, relatively independent subsets. Each subset should contain a comprehensible amount of information that people can examine and understand in isolation. The analyst can avoid information overload by following two simple rules of DFD construction:
A. The 7 +/- 2 rule
B. Interface minimization
The 7 +/- 2 rule (Miller's number) reflects the finding that the number of information chunks a person can remember and manipulate at one time varies between five and nine. Applied to DFDs:
a. A single DFD should have no more than seven plus or minus two processes.
b. No more than seven plus or minus two data flows should enter or leave a process, data store, or data element on a single DFD.
2. Minimization of Interfaces {a principle of model design that seeks simplicity by limiting the number of connections among model components}. As with information chunks, the number of connections that a person can remember and manipulate is limited. Processes on a DFD represent chunks of business or processing logic; they are related to other processes, entities, and data stores by data flows. A single process with a large number of interfaces may be too complex to understand. An analyst can usually correct the problem by dividing the process into two or more subprocesses, each of which should have fewer interfaces.
Pairs or groups of processes with a large number of data flows between them are another violation of the interface minimization rule. The best division of work among processes is the simplest, and the simplest division is the one that requires the fewest interfaces among the processes.
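Both rules can be checked mechanically once a DFD is represented as data. A minimal Python sketch, assuming flows are stored as (source, destination) name pairs; the component names are illustrative:

    from collections import Counter

    MILLERS_UPPER_BOUND = 9  # seven plus two

    processes = {"Record order", "Process order transaction"}
    flows = [
        ("Customer", "Record order"),                   # agent -> process
        ("Record order", "Process order transaction"),  # process -> process
        ("Process order transaction", "Order"),         # process -> data store
    ]

    # Rule a: no more than 7 +/- 2 processes on a single DFD.
    if len(processes) > MILLERS_UPPER_BOUND:
        print("Too many processes on this DFD:", len(processes))

    # Rule b: no more than 7 +/- 2 flows entering or leaving any component.
    interfaces = Counter()
    for source, destination in flows:
        interfaces[source] += 1
        interfaces[destination] += 1
    for component, count in interfaces.items():
        if count > MILLERS_UPPER_BOUND:
            print("Too many interfaces on:", component)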
3. ENSURING DATA FLOW CONSISTENCY
An analyst can often detect errors and omissions in a set of DFDs by looking for specific types of inconsistency. Three common consistency errors are:
1. Differences in data flow content between a process and its process decomposition
2. Data outflows without corresponding data inflows
3. Data inflows without corresponding data outflows



A process decomposition shows the internal details of a higher-level process in a more detailed form. In most cases, the data content of flows to and from a process at one DFD level should be equivalent to the content of data flows to and from all processes in its decomposition. This equivalency is called Balancing {equivalence of data content between data flows entering and leaving a process and data flows entering and leaving its process decomposition DFD}, and the higher-level DFD and the process decomposition DFD are said to be "in balance."
The term "data content" is used because data flow names can vary among DFD levels for a number of reasons, including the decomposition of one combined data flow into several smaller flows. The analyst must be careful to look at the components of data flows, not just the names. Unbalanced DFDs may be acceptable when the imbalance is due to data flows that were ignored at the higher levels.
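Balancing can be verified by comparing data content rather than flow names. A minimal sketch, assuming each boundary flow is reduced to the set of data elements it carries; all element names are illustrative:

    # Elements carried by flows crossing the parent process's boundary
    # on the higher-level DFD.
    parent_inflows = {"customer-id", "item-id", "quantity"}
    parent_outflows = {"order-confirmation"}

    # Elements carried by flows crossing the decomposition's boundary.
    # Flows between subprocesses are internal and excluded.
    child_inflows = {"customer-id", "item-id", "quantity"}
    child_outflows = {"order-confirmation"}

    in_balance = (parent_inflows == child_inflows
                  and parent_outflows == child_outflows)
    print("In balance" if in_balance else "Not in balance")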
Another type of DFD inconsistency can occur between the data inflows and outflows of a single process or data store. By definition, a process transforms data inflows into data outflows. In a logical DFD, data should not be needlessly passed into a process. Two rules follow:
a. All data that flows into a process must flow out of the process or be used to generate data that flows out of the process.
b. All data that flows out of a process must have flowed into the process or have been generated from data that flowed into the process.
A process or data store with a data input that is never used to produce a data output is called a BLACK HOLE. Another example is any process with at least one data inflow but no data outflows.

Miracle {a process or data store with a data element that is created out of nothing}
Analysts can sometimes spot black holes and miracles simply by examining the DFD. 
Any data element that is read from a data store must have been previously written to that data store. Any data element that is written to a data store must eventually be read from the data store. 
Evaluating data flow consistency is a straightforward but tedious process. 
Without precise process descriptions, it is impossible for an automated tool to know what data elements are used as input or generated as output by internal processing logic.
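Given precise descriptions of each process's inputs and outputs, the black hole and miracle checks reduce to simple set comparisons. A minimal sketch; the process names and data elements are illustrative:

    # Map each process to (set of input elements, set of output elements).
    process_io = {
        "Record order": ({"customer-id", "item-id"}, {"order-record"}),
        "Archive order": ({"order-record"}, set()),     # suspicious
        "Produce report": (set(), {"sales-report"}),    # suspicious
    }

    for name, (inflows, outflows) in process_io.items():
        if inflows and not outflows:
            print("Possible black hole:", name)  # data in, nothing out
        if outflows and not inflows:
            print("Possible miracle:", name)     # output from nothing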
DOCUMENTATION OF DFD COMPONENTS
Data flow diagrams graphically summarize interactions among three types of internal system components: processes, data flows, and data stores. But additional details about each component need to be described. First, each lowest-level process needs to be described in detail. The analyst needs to define each data flow in terms of the data elements it contains. Data stores also need to be defined in terms of their data elements, and the analyst needs to define each data element itself.
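For example, a data flow definition is textual and lists the flow's component elements. One common data dictionary notation uses + for "consists of" and braces for a repeating group. A hypothetical entry, with flow and element names that are illustrative rather than from the chapter:

    Order = Order-ID + Order-Date + Customer-Name + {Order-Item}
    Order-Item = Product-ID + Quantity + Price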
PROCESS DESCRIPTIONS

Each process on a DFD must be defined formally. There are several options for process definition. 
One option is process decomposition: as discussed earlier, a higher-level process is formally defined by a DFD that contains lower-level processes. These processes may in turn be further decomposed into even lower-level DFDs.
Eventually a point is reached at which a process does not need to be defined further by a DFD. This occurs when a process becomes so simple that it can be described adequately by other methods: structured English, decision tables, or decision trees. With each method, the process is described as an algorithm, and the analyst chooses the most appropriate presentation format by determining which is most compact, readable, and unambiguous.
STRUCTURED ENGLISH {a method of writing process specifications that combines structured programming techniques with narrative English}
Structured English uses brief statements to describe a process very carefully. It looks a bit like programming statements but without references to computer concepts. The rules of structured programming are followed, and indentation is used for clarity.
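A short hypothetical example for an order-processing step; the logic is illustrative, not taken from the chapter:

    For each item on the order:
        Find the product record for the item
        If the quantity ordered is more than the quantity on hand
            Mark the item as back-ordered
        Otherwise
            Subtract the quantity ordered from the quantity on hand
            Add the item price times the quantity to the order total
    Create the order confirmation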
Structured English is well suited to describing processes with many sequential processing steps and relatively simple control logic. It is not well suited for describing processes with the following characteristics:
a. Complex decision logic
b. Few sequential processing steps
Decision logic is complex when multiple decision variables, and a large number of possible combinations of those variables, need to be considered. A small decision table example is sketched below.
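For such logic, a decision table is usually clearer. In this hypothetical example (the conditions and actions are illustrative, not from the chapter), each rule column is one combination of condition values, and an X marks the actions taken for that combination:

    Condition                    Rule 1  Rule 2  Rule 3  Rule 4
    Order total $100 or more?      Y       Y       N       N
    Customer in good standing?     Y       N       Y       N
    -----------------------------------------------------------
    Accept order                   X               X
    Require prepayment                     X               X
    Apply free shipping            X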

Practice Quiz
Practice Quiz Unit_6
1.
The traditional approach to information systems development describes activities as ____.
A) processes carried out by people or computers 
2.
A(n) ____ represents the system response to one event.
A) DFD fragment 
3.
Which of the following data flow diagram (DFD) symbols represents the communication between an external agent and a process while the process is executing?
C) Real-time link 
4.
Which of the following is NOT a technique to document program logic for a process?
B) Decision Matrix 
5.
Analysts sometimes can spot black holes and miracles simply by examining the _________. 
C) data flow diagram 
6.
Which of the following is a table that describes the relationship between processes and the locations in which they are performed?
 C) Activity-location matrix 
7.
Which of the following is a diagram or map that identifies all of the processing locations of a system?
A) Location diagram 
8.
A ____ is a textual description of a data flow’s content and internal structure.
B) data flow definition 
9.
In small development projects, a(n) ____ may be a small loose-leaf notebook or word processing file.
B) data dictionary 
10.
A symbol on a data flow diagram (DFD) that represents an algorithm or procedure by which data inputs are transformed into data outputs is called a(n) ____.
A) process 