Department of computer science and information systems





Term Paper: Object-Oriented Databases and Object-Relational Databases


July 22, 2004



PART 1 CHAPTER 24


Computer Aided Design (CAD)

Computer Aided Manufacturing (CAM)

Network Management System

Office Information System (OIS) and Multimedia Systems

Digital Publishing

Geographic information Systems (GIS)

Interactive and Dynamic Web sites

1.2 WEAKNESSES OF RDBMSs

1.3 OBJECT-ORIENTED CONCEPTS

Abstraction, Encapsulation, Information hiding

Objects and Attributes


Polymorphism and Dynamic Binding


Mapping Classes to Relations

Accessing Objects in the Relational Database


Semantic Data Model [Hammer and McLeod, 1981]

Functional Data Model [Shipman, 1981]

Semantic Association Model [Su, 1983]

PART 2 CHAPTER

OBJECT-ORIENTED DBMSs

    1.1 CONCEPTS AND DESIGN

Introduction to OO Data Models and OODBMSs

Persistent programming Languages

Data programming languages

Alternative Strategies for Developing an OODBMS

  • Extend an existing Object-Oriented programming language with database capabilities

  • Provide extensible object-oriented DBMS libraries

  • Embed Object-Oriented Database language constructs in a conventional host language

  • Extend an existing database language with Object-Oriented capabilities

  • Develop a novel database data model/data language

    1.2 OODBMS PERSPECTIVES

Pointer Swizzling Techniques

Accessing an Object

    1.3 PERSISTENCE

Persistence schemes



Explicit paging

Orthogonal persistence

Persistence independence

Data type Orthogonality

Transitive persistence

Advantages and Disadvantages of Orthogonal Persistence

What Objects do queries apply to?

What objects are part of transaction semantics?

    1.4 ISSUES IN OODBMS



Schema evolution


  • Client-Server

- Object server

- Page server

- Database server

- Storing and executing methods

- It eliminates redundant code

- It simplifies modifications

- Methods are more secure

- It improves integrity


  • Wisconsin benchmark

  • TPC-A and TPC-B benchmarks

  • TPC-C benchmark

  • OO1 benchmark

  • OO7 benchmark




Comparison of Object-Oriented data modeling and Conceptual Data Modeling

Relationships and Referential Integrity

Behavioral Design

PART 3 CHAPTER 26




Common Object Request Broker Architecture (CORBA)

1.2 OBJECT DATA STANDARD ODMG 3.0, 1999

Object Data Management Group

Object Model

Object Definition Language

Object Query Language

Other parts of the ODMG Standard

  • Object Interchange Format

  • ODMG language bindings

1.3 OBJECTSTORE


Data Definition in ObjectStore

Data Manipulation in ObjectStore

PART 4 CHAPTER 27




The Third Manifesto

1.3 POSTGRES – AN EARLY ORDBMS

1.4 SQL3

1.5 QUERY PROCESSING AND OPTIMIZATION





Introduction to object DBMS

Object-orientation is an approach to software construction that shows considerable promise for solving some of the classic problems of software development. The underlying concept is that all software should be constructed out of standard, reusable components wherever possible. The emergence of Object-Oriented DBMSs and Object-Relational DBMSs allows the concurrent modeling of both data and the processes acting upon the data.


Computer Aided Design (CAD)

A CAD database stores data relating to mechanical and electrical design covering, for example, buildings, aircraft, and integrated circuit chips. Designs of this type have some common characteristics.

Computer Aided Manufacturing (CAM)

Computer Aided Manufacturing (CAM) is the use of computers to assist the manufacturing process. CAD and CAM are often combined into CAD/CAM systems, so that the output of the CAD module is fed to the CAM system. The design can then be converted into a sequence of processes (drilling, turning, milling, etc.) for manufacture on a Numerically Controlled (NC) milling machine, for example.

Computer-Aided Software Engineering (CASE)

To speed up the software system building process, a new concept of designing software was introduced in the '70s, called Computer Aided Software Engineering (CASE). This term is used for a generation of tools that applies rigorous engineering principles to the development and analysis of software specifications. Put simply, computers help develop software for other computers quickly by using specific tools.

Network Management System

A network management system (NMS) enables network administrators to identify and resolve problems and performance bottlenecks before they impact network services. An NMS is essential for maintaining user quality of experience in multicast video, IP telephony, and other business-critical applications.

Office information System (OIS) and Multimedia systems

An OIS database stores data relating to the computer control of information in a business, including electronic mail, documents, invoices, and so on.

Digital Publishing

Digital publishing is the digitization of the professional publishing process, coupled with the commerce and distribution powers of the Internet. Digital publishing takes place after the copy is written and includes the digital preparation and automated production, delivery and distribution of your content. By harnessing powerful XML technology, each of these functions is tied together through a common infrastructure to facilitate the sharing of data and process information between and among them. The result is a Web-based publishing solution that can economically produce one or thousands of copies, as they are needed, through an elegant publishing platform, with zero inventory.

Geographic information Systems (GIS)

A GIS combines layers of information about a place to give you a better understanding of that place. What layers of information you combine depends on your purpose—finding the best location for a new store, analyzing environmental damage, viewing similar crimes in a city to detect a pattern, and so on.

Interactive and Dynamic Web sites

Web sites that allow us to do business online, with interactive displays and database interactivity.


  • Representation of ‘real world’ entities: The process of normalization generally leads to the creation of relations that do not correspond to entities in the ‘real world’.

  • Semantic overloading: The relational model has only one construct for representing data and data relationships: the relation.

  • Homogeneous data: The relational model assumes both horizontal and vertical homogeneity. Also, the intersection of a row and column must be an atomic value, a structure that is restrictive for many ‘real world’ objects with a complex structure.

  • Limited operations: The relational model has a fixed set of operations (provided in SQL) and does not allow new operations to be specified.

  • Recursive queries: It is extremely difficult to produce recursive queries (queries about relationships that a relation has with itself).

  • Impedance mismatch: The result of mixing different programming paradigms (e.g., SQL is a declarative language that handles sets of rows, whereas a high-level language such as ‘C’ is a procedural language that can handle only one row at a time).
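The impedance mismatch in the last bullet can be seen in a small sketch (Python with the standard sqlite3 module; the table and class names are illustrative only): one declarative SQL statement describes the whole result set, yet the host program must still fetch and convert rows one at a time into objects.

```python
import sqlite3
from dataclasses import dataclass

# Hypothetical application class; names are illustrative only.
@dataclass
class Staff:
    staff_no: str
    salary: float

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (staff_no TEXT, salary REAL)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [("SG37", 12000.0), ("SL21", 30000.0)])

# SQL is declarative and set-oriented: one statement describes the result.
cursor = conn.execute("SELECT staff_no, salary FROM staff WHERE salary > 10000")

# The host language is procedural and record-oriented: each row must be
# fetched and converted into an object by hand -- the impedance mismatch.
staff_objects = []
for row in cursor:
    staff_objects.append(Staff(staff_no=row[0], salary=row[1]))
```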


Abstraction is the process of picking out (abstracting) common features of objects and procedures. A programmer would use abstraction, for example, to note that two functions perform almost the same task and can be combined into a single function. Abstraction is one of the most important techniques in software engineering.

Encapsulation is the process of combining elements to create a new entity. For example, a procedure is a type of encapsulation because it combines a series of computer instructions. Likewise, a complex data type, such as a record or class, relies on encapsulation. Object-oriented programming languages rely heavily on encapsulation to create high-level objects

Information hiding is the process of hiding details of an object or function. Information hiding is a powerful programming technique because it reduces complexity. One of the chief mechanisms for hiding information is encapsulation -- combining elements to create a larger entity. The programmer can then focus on the new object without worrying about the hidden details. In a sense, the entire hierarchy of programming languages -- from machine languages to high-level languages -- can be seen as a form of information hiding.

Information hiding is also used to prevent programmers from changing -- intentionally or unintentionally -- parts of a program.
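The ideas of encapsulation and information hiding can be sketched in Python (the Account class is a made-up example): state and behavior are combined in one object, and the state is reached only through the public operations.

```python
class Account:
    """Encapsulation: state (the balance) and behavior (deposit/balance)
    are combined into one entity, and the state is hidden behind a
    public interface (information hiding)."""

    def __init__(self, opening_balance=0):
        self._balance = opening_balance  # hidden detail (by convention)

    def deposit(self, amount):
        # The public operation can enforce rules the hidden state relies on.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

acct = Account(100)
acct.deposit(50)
```

Callers work only with `deposit` and `balance`; the `_balance` attribute can later be reimplemented (say, as a transaction log) without changing any client code.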

Attributes (instance variables) describe the current state of an object

Objects are the physical and conceptual things we find in the universe around us. Hardware, software, documents, human beings, and even concepts are all examples of objects.

Aggregation is either: the process of creating a new object from two or more other objects, or an object that is composed of two or more other objects.

A monolithic object is an object that has no externally-discernible structure. Said another way, a monolithic object does not appear to have been constructed from two or more other objects. Specifically, a monolithic object can only be treated as a cohesive whole

Composite objects are objects that have an externally-discernible structure, and the structure can be addressed via the public interface of the composite object. The objects that comprise a composite object are referred to as component objects

The state of an object is the condition of the object, or a set of circumstances describing the object


A class is a thing that consists of both a pattern and a mechanism for creating items based on that pattern. This is the "class as an `instance factory'" view; instances are the individual items that are "manufactured" (created) using the class's creation mechanism.

A metaclass is a class whose instances themselves are classes

A parameterized class is a template for a class wherein specific items have been identified as being required to create non-parameterized classes based on the template

The term polymorphism originates from the Greek 'poly morph', meaning many forms. The polymorphism concept allows different objects to react to the same stimulus (i.e., a message) differently [Hymes, 1995].

Inheritance allows one class to be defined as a special case of a more general class. These special cases are known as subclasses and the more general cases are known as super classes. The process of forming a superclass is referred to as generalization; forming a subclass is specialization. A subclass inherits all the properties of its superclass and additionally defines its own unique properties (attributes and methods).

Overloading allows the name of the method to be reused within a class definition or across definitions.

Overriding, a special case of overloading, allows the name of a property to be redefined in a subclass.

Dynamic binding allows the determination of an object's type and methods to be deferred until runtime.
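Inheritance, overriding, and dynamic binding can be illustrated with a small Python sketch (the Shape hierarchy is a stock textbook example, not from the source): each subclass overrides `area`, and the method actually run is chosen at runtime from the object's class.

```python
class Shape:
    """Superclass (generalization)."""
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    """Subclass (specialization) overriding area()."""
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square(Shape):
    def __init__(self, s):
        self.s = s
    def area(self):
        return self.s ** 2

# Dynamic binding: the same message 'area' is sent to every object,
# and each reacts according to its own class (polymorphism).
shapes = [Circle(1), Square(2)]
areas = [s.area() for s in shapes]
```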


Mapping Classes to Relations

Accessing Objects in the Relational Database


Semantic Data Model [Hammer and McLeod, 1981]

Functional Data Model [Shipman, 1981]

Semantic Association Model [Su, 1983]




Introduction to OO Data Models and OODBMSs

An Object-Oriented Data Model (OODM) is a logical data model that captures the semantics of objects supported in object-oriented programming.

An Object-Oriented Database (OODB) is a persistent and sharable collection of objects defined by an OODM.

An OODBMS is the manager of an OODB.

Persistent programming Languages

This is a language that provides its users with the ability to (transparently) preserve data across successive executions of a program, and even allows such data to be used by many different programs

Data programming languages

This is a language that integrates some ideas from the database programming model with traditional programming language features

Alternative Strategies for Developing an OODBMS

  • Extend an existing Object-Oriented programming language with database capabilities

  • Provide extensible object-oriented DBMS libraries

  • Embed Object-Oriented Database language constructs in a conventional host language

  • Extend an existing database language with Object-Oriented capabilities

  • Develop a novel database data model/data language


The object-oriented database management system (OODBMS) has gained considerable popularity in the last few years due to its flexibility and compatibility

Pointer Swizzling Techniques

Pointer swizzling is the action of converting object identifiers (OIDs) to main-memory pointers and back again. The aim of pointer swizzling is to optimize access to objects. There are various techniques in use:

  1. No swizzling

The easiest implementation of faulting objects into and out of memory is not to do any swizzling at all. In this case objects are faulted into memory by the underlying object manager, and a handle is passed back to the application containing the object's OID.

  2. Object referencing

To be able to swizzle a persistent object's OID to a virtual memory pointer, a mechanism is required to distinguish between resident and non-resident objects. Most techniques are variations of either edge marking or node marking.

  3. Hardware-based schemes

Hardware-based swizzling uses virtual memory access-protection violations to detect accesses of non-resident objects. These schemes use the standard virtual memory hardware to trigger the transfer of persistent data from disk to main memory.

Classification of pointer swizzling

Pointer swizzling techniques can be classified according to the following three dimensions:

  1. Copy versus in-place swizzling

When faulting objects in, the data can either be copied into the application's local object cache or accessed in place within the object manager's database cache. Copy swizzling may be more efficient as, in the worst case, only modified objects have to be unswizzled back to their OIDs, whereas an in-place technique may have to unswizzle an entire page of objects if one object on the page is modified. On the other hand, with the copy approach, every object must be explicitly copied into the local object cache.

  2. Eager versus lazy swizzling

Moss and Elliot [1990] defined eager swizzling as the swizzling of all OIDs for persistent objects on all data pages used by the application before any object can be accessed; with lazy swizzling, by contrast, an OID is swizzled only when the pointer is actually dereferenced.

  3. Direct versus indirect swizzling

This is an issue only when it is possible for a swizzled pointer to refer to an object that is no longer in virtual memory. With direct swizzling, the virtual memory pointer of the referenced object is placed directly in the swizzled pointer; with indirect swizzling, the virtual memory pointer is placed in an intermediate object, which acts as a placeholder for the actual object.
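Indirect swizzling can be sketched as follows (a minimal illustration, not a real object manager; all names are hypothetical): a swizzled reference points at an intermediate placeholder, so the real object can be evicted and re-faulted without finding every pointer that refers to it.

```python
# Stand-in for the on-disk object store; keys are OIDs.
DATABASE = {"oid-42": {"name": "widget"}}

class Placeholder:
    """Intermediate object used by indirect swizzling."""
    def __init__(self, oid):
        self.oid = oid
        self.target = None          # None => referenced object not resident

    def dereference(self):
        if self.target is None:     # fault the object in on first access
            self.target = DATABASE[self.oid]
        return self.target

    def evict(self):
        # Only the placeholder must be reset; every swizzled pointer that
        # refers to it keeps working.
        self.target = None

ref = Placeholder("oid-42")
obj = ref.dereference()             # faulted in
ref.evict()                         # object no longer "in memory"
obj_again = ref.dereference()       # transparently re-faulted
```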

Accessing an Object

How an object is accessed on secondary storage is another important aspect that can have a significant impact on OODBMS performance. Consider the approach taken in a conventional relational DBMS with a two-level storage model; the following steps apply:

  • The DBMS determines the page on secondary storage that contains the required record, using indexes or table scans as appropriate. The DBMS then reads that page from secondary storage and copies it into its cache.

  • The DBMS subsequently transfers the required parts of the record from the cache into the application's memory space. Conversions may be necessary to convert the SQL data types into the application's data types.

  • The application can then update the record’s fields in its own memory space.

  • The application transfers the modified fields back to the DBMS cache using SQL, again requiring conversions between data types.

  • Finally, at an appropriate point the DBMS writes the updated page of the cache back to the secondary storage.
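The five steps above can be sketched in miniature (all structures here are hypothetical stand-ins for real DBMS components; "SQL types" are simulated as strings that must be converted to application types):

```python
# Page number -> records, with values stored as "SQL" text types.
DISK = {0: {"rec1": {"salary": "12000"}}}
cache = {}

def read_page(page_no):
    # Steps 1-2: locate the page and copy it into the DBMS cache.
    cache[page_no] = dict(DISK[page_no])

def fetch_record(page_no, key):
    # Step 3: transfer the record into the application's memory space,
    # converting SQL types into application types.
    rec = cache[page_no][key]
    return {"salary": float(rec["salary"])}

def store_record(page_no, key, app_rec):
    # Step 4: transfer modified fields back to the cache, converting again.
    cache[page_no][key] = {"salary": str(app_rec["salary"])}

def flush(page_no):
    # Step 5: write the updated page back to secondary storage.
    DISK[page_no] = dict(cache[page_no])

read_page(0)
record = fetch_record(0, "rec1")
record["salary"] += 500.0           # the application updates its own copy
store_record(0, "rec1", record)
flush(0)
```

Note how the record crosses the page/cache/application boundary twice, with a type conversion at each crossing; this overhead is exactly what OODBMS storage models try to avoid.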


Since most application programs need to deal with persistent data, adding persistence to objects is essential to making object-oriented applications useful in practice. There are three classes of solutions for implementing persistence in object-oriented applications: the gateway-based object persistence approach, which adds object-oriented programming access to persistent data stored in traditional non-object-oriented data stores; the object-relational database management system (DBMS) approach, which enhances the extremely popular relational data model by adding object-oriented modeling features; and the object-oriented DBMS approach (also called the persistent programming language approach), which adds persistence support to objects in an object-oriented programming language.

Persistence schemes

Database management systems. Classical database management systems (DBMSs) support persistent (long-term) data as quite distinct from transient (short-term) data. Long-term data is described in a schema or data definition language (DDL) and is manipulated using a query or data manipulation language (DML). An application system typically consists of application programs written in a general-purpose programming language, together with the DDL schemas for the database. The application programs typically access the database through queries expressed as embedded DML statements. In 4GLs, the general-purpose language is replaced by a domain-specific language with the DBMS's DDL and DML as partially integrated sublanguages.

The key problem with the classical DBMS approach is that the DDL defines a type/value space that is distinct from, and not directly compatible with, the type/value space defined by the host programming language. Bridging the gap between the two spaces typically adds complexity to the application system design and requires extra application code that would be absent if the long-term data structures were entirely memory resident. In some application domains, typical database schema and query languages are simply not suitable for describing and managing the data structures.

Checkpointing. Some systems implement persistence by copying part or all of a program's address space to disc. In cases where the entire address space is saved, the computation can be restarted from the checkpoint. In other cases, only the contents of the program's heap are saved. Checkpoint systems tend to have two major problems. First, a checkpoint or saved heap file can typically only be used by the application program that created it. Even a small change to the application can lead to incompatibilities which render the checkpoint useless. Second, a typical checkpoint contains a large amount of information that is of no use in later executions.

Data structure copying. There are many examples of persistence mechanisms that work by copying the closure of a data structure to an external medium [27, 38]. The application typically invokes a write operation on a data value, which traverses the graph of objects reachable from the value and writes it to disc in a flattened form. At a later stage, the flattened data structure can be read in, giving a new copy of the data structure. Depending on the implementation of the mechanism, the application may need to provide hooks to assist in the data structure traversal process; e.g. [43]. This style of persistence is often called pickling, or in the distributed computing context, marshalling.

Persistence by data structure copying has two inherent problems. First, it does not preserve object identity; e.g. if two graphs that share a common subgraph are separately copied to disc and then restored, the subgraph will no longer be shared in the restored copies. Second, data structure copying is not incremental, and is therefore not an efficient way to save small changes to large data structures.
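The identity-loss problem can be demonstrated directly with Python's standard `pickle` module, which is a data-structure-copying mechanism of exactly this kind:

```python
import pickle

# Two graphs share a common subgraph in memory.
shared = [1, 2, 3]
graph_a = ["a", shared]
graph_b = ["b", shared]
assert graph_a[1] is graph_b[1]          # identity is preserved in memory

# Copying each closure separately (pickling) flattens each graph on its own...
restored_a = pickle.loads(pickle.dumps(graph_a))
restored_b = pickle.loads(pickle.dumps(graph_b))

# ...so the restored copies no longer share the subgraph: the values are
# equal, but object identity has been lost.
identity_lost = restored_a[1] is not restored_b[1]
```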

Explicit data structure paging. Some persistence mechanisms work by "paging" objects between the application's heap and a persistent database [29, 42]. Object pointers typically exist in two forms: machine addresses and persistent identifiers (PIDs). When an application wants to access an object referred to by a PID, it makes an explicit call to the persistence manager to obtain the corresponding address. If the object in question is not in memory, the persistence manager reads it in from disc and records the object's PID-to-address mapping. Finally, the object's machine address is returned to the caller. In effect, objects are "demand paged" into memory.
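This demand-paging protocol can be sketched as follows (a hypothetical illustration; the store, PIDs, and manager API are invented for the example):

```python
# Stand-in for the on-disc persistent store; keys are PIDs.
STORE = {"pid:7": {"title": "report"}}

class PersistenceManager:
    """Maps PIDs to in-memory objects, faulting them in on demand."""
    def __init__(self):
        self.resident = {}          # PID -> in-memory object

    def lookup(self, pid):
        # The application's explicit call: on first use the object is read
        # in from "disc" and the PID -> address mapping is recorded.
        if pid not in self.resident:
            self.resident[pid] = dict(STORE[pid])
        return self.resident[pid]

pm = PersistenceManager()
doc = pm.lookup("pid:7")            # faulted in from the store
doc2 = pm.lookup("pid:7")           # second lookup hits the mapping
```

The burden the text describes is visible here: the application must remember which references are PIDs and route every access through `lookup`.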

There are two common schemes for creating and updating persistent objects. The first scheme requires the caller to allocate the object using a persistent heap allocator. In the absence of pervasive garbage collection, an object will exist in the persistent store until it is explicitly freed by the application. This leads inevitably to storage-leak and dangling-pointer problems.

The second scheme makes objects persistent depending on their reachability from the "root" of the persistent store. This presupposes that the persistence manager has the structural information needed to trace the reachable nodes, but it also allows automatic garbage collection of the persistent store.

Apart from the storage management issue already noted, the main problem with explicit data structure paging is that the application needs to handle the two different kinds of object pointer. This is a burden for the programmer, and reduces the reliability and maintainability of applications. These problems are avoided if the persistence mechanism is fully integrated with the application programming language.

Orthogonal Persistence

The most sophisticated form of persistence is known as orthogonal persistence. In an orthogonal persistence mechanism, the lifetime of a first-class data value is independent of the other static and dynamic properties of the value. A persistent value does not have a special type, and is created and used by the application in the same way as a non-persistent value. Loading and saving of values does not alter their semantics, and the process is transparent to the application program. If the base language supports first-class function values and task values, these should persist along with other values.

Most examples of orthogonal persistence use "root" reachability to determine the lifetime of objects. The persistent store contains a distinguished object called the "root" object. When the store is stabilized, the persistence mechanism finds all objects that can be reached from the root object and ensures that their current values are saved to the persistent store. A subsequent program execution can access persistent values by first calling a built-in function to obtain the store's root object, and then traversing a path from the root to the required values.
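Root reachability can be sketched with a tiny tracing walk (an illustration only; a real persistent store stabilizes to disc, whereas this sketch merely collects the reachable objects):

```python
# A distinguished "root" object, and a value not reachable from it.
root = {"config": {"depth": 3}, "index": [{"id": 1}, {"id": 2}]}
orphan = {"id": 99}                  # unreachable => would not be persisted

def reachable(obj, seen=None):
    """Collect every dict/list reachable from obj, avoiding cycles."""
    if seen is None:
        seen = []
    if any(x is obj for x in seen):  # already visited (identity check)
        return seen
    if isinstance(obj, (dict, list)):
        seen.append(obj)
        children = obj.values() if isinstance(obj, dict) else obj
        for child in children:
            reachable(child, seen)
    return seen

# "Stabilization": everything reachable from the root is saved.
persisted = reachable(root)
```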

Orthogonal persistence's main advantages over other methods of managing long-term data are as follows:

  • there is no need to define long-term data in a separate schema language,

  • no special application code is required to access or update persistent data, and

  • there is no limit to the complexity of the data structures that can be made persistent.

This makes languages that support orthogonal persistence particularly good for applications that have to maintain complex long-term state.

Unfortunately, current implementations of orthogonal persistence have some important drawbacks:

  • Current generation persistence technology is inefficient compared with more mature languages and data management systems. Few, if any, compilers for persistent languages generate native machine code. Persistent stores do not support bulk data well, and their performance tends to degrade due to data locality effects.

  • Most current generation persistent stores are only serially shareable.

  • Current generation persistence systems do not support concurrency or distribution.

  • Current generation persistence systems do not interface well with the non-persistent world; e.g. access to traditional database systems is typically not supported.

We can expect most, if not all, of these problems to be largely solved in the next 5 to 10 years. Current research into orthogonal persistence includes the following:

  • Compilers: production of native code compilers is largely a matter of resources.

  • Concurrency and distribution support: initial work has been done on concurrency and distribution in persistent languages [35, 36, 54], though some important semantic problems are still to be solved. Current effort is mainly focused on implementation issues [52, 53].

  • Persistent stores: developments in persistent store technology should give better support for bulk data.

  • Persistent programming environments: current research into configuration management of persistent programs and novel binding schemes [24] could revolutionize persistent programming environments.

  • Meta-programming: current work on linguistic [44] and other forms of reflection [24] should lead to better application support for meta-programming.

Finally, there has been a tendency in the persistence research community not to make the tools available outside of restricted circles. We believe that this has inhibited both research into orthogonal persistence and the use of persistence technology for applications programming. This has tended to limit orthogonal persistence's visibility in the wider software engineering community.


Three issues are defined in this document.

  • Long-duration transactions

  • Versions

  • Schema evolution


A transaction is a logical unit of work, which should always transform the database from one consistent state to another. The type of transaction found in business applications is typically of short duration. In contrast, transactions involving complex objects, such as those found in engineering and design applications, can continue for several days. Clearly, to support long-duration transactions we need to use different protocols from those used for traditional database applications, in which transactions are typically of very short duration.


The process of maintaining the evolution of objects is known as Version Management. An Object Version represents an identifiable state of an object; a version history represents the evolution of an object. Versioning should allow changes to the properties of objects to be managed in such a way that object references always point to the correct version of an object.
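A minimal sketch of version management (the class and its API are invented for illustration): each change produces a new, identifiable version, and a reference can name a specific version so it always resolves to the correct state.

```python
class VersionedObject:
    """Keeps a version history; version numbers identify each state."""
    def __init__(self, initial_state):
        self.history = [initial_state]     # versions 0, 1, 2, ...

    def new_version(self, state):
        # Changes never overwrite earlier states; they extend the history.
        self.history.append(state)
        return len(self.history) - 1       # number of the new version

    def get(self, version):
        return self.history[version]

    @property
    def current(self):
        return self.history[-1]

part = VersionedObject({"material": "steel"})
v1 = part.new_version({"material": "aluminium"})
```

A reference that stored `(part, 0)` still resolves to the steel design after the change, which is the property versioning is meant to guarantee.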

Schema evolution

Design is an incremental process and evolves with time. To support this process, applications require considerable flexibility in dynamically defining and modifying the database schema. For example, it should be possible to modify class definitions, the inheritance structure, and the specifications of attributes and methods without requiring system shutdown.


Here we discuss two architectural issues: how best to apply the client-server architecture to the OODBMS environment, and the storage of methods.


There are three basic architectures for a client-server DBMS, which vary in the functionality assigned to each component:

Object server: this approach attempts to distribute the processing between the two components.

Page server: most of the database processing is performed by the client.

Database server: most of the database processing is performed by the server.

Storing and executing methods

There are two approaches to handling methods: store the methods in external files, or store the methods in the database. The second approach offers several benefits:

- It eliminates redundant code

- It simplifies modifications

- Methods are more secure

- It improves integrity


Over the years, various database benchmarks have been developed as tools for comparing the performance of DBMSs, and they are frequently referred to in academic, technical, and commercial literature:

  • Wisconsin benchmark

  • TPC-A and TPC-B benchmarks

  • TPC-C benchmark

  • OO1 benchmark

  • OO7 benchmark


The Object-Oriented Database System Manifesto proposes 13 mandatory features for an OODBMS, based on two criteria: it should be an object-oriented system and it should be a DBMS. The first eight rules apply to the object-oriented characteristics.

1. Complex objects

Thou shalt support complex objects.

Databases based on the Relational Model only consist of tables with tuples and atomic values. The requirement for object-oriented databases is to offer constructors to build user-defined complex objects of any kind. Constructors are: tuples, sets, bags, lists and arrays. All object constructors should be orthogonal; the designer should be able to mix the constructors in any way (a list of sets, an array of bags or a list of lists).
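The constructor orthogonality described above can be illustrated with Python's own built-in constructors (the "order" object is an invented example): tuples, lists, and sets nest freely in any combination.

```python
# Orthogonal constructors: any constructor may contain any other.
list_of_sets = [frozenset({1, 2}), frozenset({3})]   # a list of sets
tuple_of_lists = ([1, 2], ["a", "b"])                # a tuple (record) of lists
list_of_lists = [[1], [2, 3], []]                    # a list of lists

# A user-defined complex object mixing constructors arbitrarily:
order = {
    "order_no": 1001,
    "lines": [                    # list constructor
        ("widget", 2),            # tuple constructor inside the list
        ("gadget", 1),
    ],
    "tags": {"rush", "export"},   # set constructor
}
```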

2. Object identity

Thou shalt support object identity.

In relational databases each tuple has a unique set of values called the primary key. We can say that the identification of any tuple is based on the values of the tuple. It is part of the relational philosophy not to have any hidden information that can be used to identify a tuple. This situation makes it hard to guarantee the integrity of the database. Object-oriented databases should have an object identity which is independent of the values of the object. In most cases the object identity is a hidden value (and not part of the user-designed data structure).
Example: The DBMS POET uses object identifiers (O-IDs) like the following: (0-772#15).
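The distinction between value equality and object identity can be shown in a few lines of Python (the Point class is an invented example):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        # Value-based comparison, like tuple comparison in the
        # relational model.
        return (self.x, self.y) == (other.x, other.y)

p1 = Point(1, 2)
p2 = Point(1, 2)

same_value = (p1 == p2)       # same state (value equality)
same_identity = (p1 is p2)    # two distinct, separately identifiable objects
```

Two tuples with identical attribute values are indistinguishable in the relational model; here `p1` and `p2` remain distinct objects even though they are equal by value.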

3. Encapsulation

Thou shalt encapsulate thine objects.

Like abstract data structures, objects shall encapsulate state and behavior (attributes and operations). Objects should be handled by using the public operations. Access to (private) attributes should be impossible. This situation creates a problem concerning the use of ad-hoc query systems: These systems need the direct access to all attributes without using predefined access operations. In this case the encapsulation mechanisms should not be strict.

4. Types and Classes

Thou shalt support types and classes.

Each object-oriented DBMS in the sense of the Manifesto has to be able to produce objects. This means:

  • It could use types as the abstract construction plan for objects of a special kind. Systems such as C++ or Turbo Pascal belong to this category.

  • It could deal with classes while using a so-called object factory and an object warehouse. This approach is strongly run-time oriented. Systems such as Smalltalk or Lisp belong to this category.

5. Class or Type Hierarchies

Thine classes or types shalt inherit from their ancestors.

Classes or types of the database structure could be arranged in an inheritance tree structure.
Example: A Person could be a Student or a Member of Staff (or even just a Person). Student and Member of Staff are derived classes/types of the base class/type Person.
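The Person example above, sketched as a small inheritance tree in Python (the attribute names are illustrative):

```python
class Person:
    """Base class (generalization)."""
    def __init__(self, name):
        self.name = name

class Student(Person):
    """Derived class (specialization) of Person."""
    def __init__(self, name, student_id):
        super().__init__(name)        # inherits Person's properties
        self.student_id = student_id  # plus its own unique property

class StaffMember(Person):
    def __init__(self, name, staff_no):
        super().__init__(name)
        self.staff_no = staff_no

s = Student("Ann", "S123")
```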

6. Overriding, overloading and late binding

Thou shalt not bind prematurely.

The topics of this chapter cover well-known mechanisms which came up with object-oriented programming:

  • Overriding and overloading of operations (i.e. giving the same name to operations of similar behavior even though they are part of different types/classes). The concrete operation doesn't depend on the name alone but on the type/class of the object it is invoked on.

  • Late binding of operations (i.e. during run-time the appropriate operation will be selected).

7. Computational completeness

Thou shalt be computationally complete.

A programming language should offer the programmer features to express any algorithm. If so, we call the programming language computationally complete. SQL, for instance, is not computationally complete.

8. Extensibility

Thou shalt be extensible.

Every database system has a set of predefined (system-defined) types. This set must be extensible, and system-defined and user-defined types should be treated in the same way.
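A user-defined type behaving like a built-in one can be sketched in Python through operator overloading (the Money type and its fields are invented for the example):

```python
# Hypothetical user-defined type treated like a system-defined one:
# it supports the same + operator that built-in numbers use.
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def __add__(self, other):
        if self.currency != other.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount + other.amount, self.currency)

    def __repr__(self):
        return f"{self.amount} {self.currency}"

total = Money(10, "EUR") + Money(5, "EUR")
print(total)   # -> 15 EUR
```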

9. Persistence

Thou shalt remember thy data.

It goes without saying that an object-oriented DBMS must be able to store data. In the sense of the Manifesto, however, this means that every object (system- or user-defined) must be allowed to become persistent, and that the process of storing an object should be implicit: there should be no need for an explicit store or move of data.
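A toy sketch of this idea, often called persistence by reachability, is shown below: any object reachable from a persistent root is stored transitively, with no per-object store calls in application code. The Database class, its commit method, and the file name are all invented for the illustration; a real OODBMS does this transparently.

```python
import pickle

class Database:
    """Toy sketch of implicit persistence by reachability."""

    def __init__(self, path):
        self.path = path
        try:
            with open(path, "rb") as f:
                self.root = pickle.load(f)
        except FileNotFoundError:
            self.root = {}

    def commit(self):
        # everything reachable from self.root is serialized transitively
        with open(self.path, "wb") as f:
            pickle.dump(self.root, f)

db = Database("toy.db")
db.root["answer"] = [1, 2, 3]     # no explicit "store" of the list
db.commit()

db2 = Database("toy.db")          # a fresh session sees the same data
print(db2.root["answer"])         # -> [1, 2, 3]
```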

10. Secondary storage management

Thou shalt manage very large databases.

Every DBMS must offer features for efficient storage management, and an object-oriented DBMS is no exception. These include index management, data clustering, data buffering, access path selection and query optimization. All of these features have to be invisible to the application programmer.

11. Concurrency

Thou shalt accept concurrent users.

Again a feature of every DBMS: an object-oriented DBMS has to offer mechanisms to synchronize access by more than one user at the same time.
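The core idea of synchronizing concurrent updates can be sketched with a simple lock (the deposit function and shared balance are invented for the example; real DBMSs use far finer-grained schemes such as two-phase locking or multiversion concurrency control):

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(amount):
    global balance
    with lock:                 # only one "user" updates at a time
        current = balance
        balance = current + amount

# 100 concurrent "users" each deposit 1; with the lock the
# read-modify-write sequence is never interleaved.
threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)   # -> 100
```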

12. Recovery

Thou shalt recover from hardware and software failures.

An object-oriented DBMS should provide the usual level of service in the case of software or hardware failures.

13. Ad Hoc Query Facility

Thou shalt have a simple way of querying data.

An object-oriented DBMS should provide an ad hoc query facility that meets the following criteria:

  • It should be high-level and declarative (WHAT instead of HOW).

  • It should be efficient; the query facility must contain a query optimizer.

  • It should work on any database structure (even one based on user-defined types/classes).
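The WHAT-versus-HOW contrast can be illustrated with plain Python objects standing in for a database (the Student class and sample data are invented; the OQL comparison in the comment is only an analogy):

```python
class Student:
    def __init__(self, name, year):
        self.name = name
        self.year = year

students = [Student("Ann", 1), Student("Ben", 3), Student("Cara", 3)]

# Declarative: state WHAT is wanted (comparable in spirit to an
# OQL query:  select s.name from students s where s.year = 3).
third_years = [s.name for s in students if s.year == 3]

# Procedural: spell out HOW to find it, step by step.
result = []
for s in students:
    if s.year == 3:
        result.append(s.name)

print(third_years)   # -> ['Ben', 'Cara']
```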


Advantages of OODBMS

    • More semantic information

    • Support for complex objects

    • Extensibility of data types

    • May improve performance with efficient caching

    • Versioning

    • Reusability

    • Inheritance speeds development and application

    • Potential to integrate DBMSs into single environment

Disadvantages of OODBMS

    • Strong opposition from the established RDBMSs

    • Lack of theoretical foundation

    • Throwback to old pointer systems

    • Lack of standard ad hoc query language

    • Lack of business data design and management tools

    • Steep learning curve

    • Low market presence

    • Lack of compatibility between different OODBMSs


Comparison of Object-Oriented Data Modeling (OODM) and Conceptual Data Modeling (CDM)

OODM            CDM                     Difference
Object          Entity                  Object includes behavior
Attribute       Attribute               None
Association     Relationship            Associations are the same, but inheritance in OODM includes both state and behavior
Message         No corresponding concept in CDM
Class           Entity type/Supertype   None
Instance        Entity                  None
Encapsulation   No corresponding concept in CDM

Relationships and Referential Integrity

There are several policies a DBMS can adopt to maintain referential integrity when objects are deleted:

  • Do not allow the user to explicitly delete objects.

  • Allow the user to delete objects when they are no longer required.

  • Allow the user to modify and delete objects and relationships when they are no longer required.

Behavioral Design

In object-oriented analysis, the processing requirements are mapped onto a set of methods that are unique for each class. The methods that are visible to the user or to other objects (public methods) must be distinguished from methods that are purely internal to a class (private methods). There are different types of public and private methods:

  • Constructors

  • Access methods

  • Transform methods
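The three kinds of methods can be sketched in one small class (the Staff class, its attributes, and the bonus rate are invented for the example):

```python
class Staff:
    # Constructor: creates and initializes a new instance
    def __init__(self, name, salary):
        self.name = name
        self._salary = salary

    # Access method: returns the value of a (private) attribute
    def get_salary(self):
        return self._salary

    # Transform method: derives a new value from stored attributes
    def annual_bonus(self, rate=0.1):
        return self._salary * rate

emp = Staff("Dana", 30000)
print(emp.get_salary())     # -> 30000
print(emp.annual_bonus())   # -> 3000.0
```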
