Compiled and edited by Tony Shaw, Program Chair, Wilshire Conferences, Inc





Current Status in Contemporary Organizations


Very few organizations, large or small, have a well-defined enterprise data strategy. If asked, some will point you to dusty and outdated volumes of database standards, usually geared specifically to their Relational Database Management System (RDBMS). The more advanced organizations will have a subset of standards and perhaps a documented strategy on portions of what should be included in an overall strategy. In most organizations, the value of data is not well understood. Data is considered the province of the department that creates it, and this data is often jealously guarded by that department.


Data is usually addressed on a piecemeal basis. A company will launch an effort to choose its preferred RDBMS, or will attack a database performance problem when response time becomes excessive. Rarely do organizations work from the big picture; as a result they suboptimize solutions, introduce programs that may have a deleterious effect on the overall enterprise, cause inconsistencies that result in major interfacing efforts, or develop systems that cannot be easily integrated.

Why an enterprise data strategy is needed


Not having an enterprise data strategy is analogous to a company allowing each department and each person within each department to develop their own chart of accounts. The empowerment would allow each person in the organization to choose their own numbering scheme. Existing charts of accounts would be ignored as each person exercised his or her own creativity. Even to those of us who don’t wear green eye shades, the resulting chaos is obvious.


The chaos without an enterprise data strategy is not as obvious, but the indicators abound: dirty data, redundant data, inconsistent data, and users who are becoming increasingly dissatisfied with the performance of IT. Without an enterprise data strategy, the people within the organization have no guidelines for making decisions that are absolutely crucial to the success of the IT organization. In addition, the absence of a strategy gives a blank check to those who want to pursue their own agendas. This includes those who want to try new database management systems, new technologies (often unproved), and new tools. This type of environment provides no checks for those who might be pursuing a strategy that has no hope for success.


An enterprise data strategy should result in the development of systems with less risk and a higher success rate. It should also result in much higher quality systems. An enterprise data strategy provides a CIO with a rationale to counter arguments for immature technology and for data strategies that are inconsistent with existing strategies.

Vision


The vision of an enterprise data strategy that fits your organization has to conform to the overall strategy of IT, which in turn must conform to the strategy of the business. The vision should conform to and support where the organization wants to be in five years.


Enterprise Messaging Modeling and Data Integration

Bringing Data Skills to Service Oriented Architecture


Dave McComb

President

Semantic Arts


This was a highly interactive and participatory session, where the audience worked their way through a "makeover" from data modelers to message modelers. The tutorial combined elements of why enterprises are moving to Service Oriented Architectures and using messages to achieve integration. Through a series of structured exercises, the participants were able to experience the impact of splitting existing systems into functional partitions and shared services, and then re-integrating them with messages. The session also contained a methodology for doing this at an enterprise level and guidelines for achieving loose coupling between applications and services.
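
To make the idea of loose coupling through messages more concrete, here is a minimal Python sketch, not taken from the tutorial itself: the ordering code publishes an "OrderPlaced" message instead of calling a billing service directly, so the two services share only the message format. The service names, message type, and fields are invented for illustration.

# Minimal sketch of loose coupling through messages: the ordering function
# publishes a message instead of calling the billing service directly.
# The message type, fields, and service names are invented for illustration.
import json
from typing import Callable

subscribers: dict[str, list[Callable[[dict], None]]] = {}

def subscribe(message_type: str, handler: Callable[[dict], None]) -> None:
    subscribers.setdefault(message_type, []).append(handler)

def publish(message_type: str, payload: dict) -> None:
    # Services share only the serialized message, not each other's objects.
    message = json.loads(json.dumps(payload))
    for handler in subscribers.get(message_type, []):
        handler(message)

# Billing service: knows the message format, not the order service's internals.
def billing_handler(message: dict) -> None:
    print(f"Billing customer {message['customer_id']} for {message['amount']}")

subscribe("OrderPlaced", billing_handler)

# Order service: publishes an event rather than invoking billing directly.
publish("OrderPlaced", {"order_id": 17, "customer_id": 42, "amount": 99.95})

Because the order service never references the billing service, either side can be repartitioned or replaced as long as the message contract holds.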


Enterprise Metadata: The Art and Science of Best Practices


R. Todd Stephens

Director

BellSouth


Enterprise metadata is moving to the forefront of most major organizations, and we must ensure that we succeed at implementation and integration throughout the organization. Todd shared a five-year learning curve with the attendees of this tutorial. The session reviewed a metadata framework that covered everything from the raw material of metadata to products, processes, customer service, and the metadata experience. Other items covered included:


  • Enterprise Metadata Environment

  • The Architecture of an Enterprise Metadata effort

  • The Project and Implementation side of an enterprise effort

  • The importance of Usability in Metadata

  • Technical Architecture of the Repository Collection

  • The Principle of Success around the service side of delivery

  • Key Lessons Learned


Metadata and the principles that define this technology must be expanded into the other areas of the enterprise environment. While data was the key interest at this conference, we cannot ignore the entire collection of assets in the corporation.


Building the Managed Meta Data Environment


David Marco

President

Enterprise Warehousing Solutions, Inc.


The tutorial "Building the Managed Meta Data Environment (MME)" covered a great many topics. During the first quarter of class the fundamental of meta data were discussed. Including technical meta data, business meta data, and a very detailed discussion on the return on investment that a MME provides. Over a dozen MME use examples were presented to the audience.


The second and third quarters of the tutorial focused on the managed meta data environment and its six key components: the Meta Data Sourcing Layer, Meta Data Integration Layer, Meta Data Repository, Meta Data Management Layer, Meta Data Marts, and the Meta Data Delivery Layer. Each of these six components was discussed in great detail. In addition, several implementation best practices were illustrated. The last quarter of the seminar focused on the various tool vendors in the meta data management space. Six vendor tools were discussed in detail and all of the meta data tool questions from the audience were fielded. The tutorial ended with two real-world MME case studies.


XML Database and Metadata Strategies


James Bean

Chairman and Co-founder

Global Web Architecture Group


Determining the most effective strategy for persisting (storing) XML in a relational or object-relational database requires careful research and evaluation. A "best practices" approach incorporates activities and techniques that are analogous to those used for more traditional data architecture efforts.


Business and data requirements, constraints, and capabilities can be used to guide selection of the best XML database storage strategy. The most common strategies include:

Store the entire XML document as a complete entity

Decompose (e.g. "shred") the XML document and store separate data values in tables and columns

Store a combination of XML document fragments and separate data values

Store the XML document external to the database, but link if possible


Further, there are a number of complexities and challenges that must also be addressed (e.g., character encoding, data type disparity, schema type support).
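
As a rough illustration of the second strategy above, the following Python sketch "shreds" a small XML document and stores the individual element values in relational columns, using SQLite purely for convenience. The document layout, table, and column names are invented examples rather than recommendations from the session.

# Minimal sketch of the "shred" strategy: decompose an XML document and
# store individual element values in relational columns. The XML layout
# (catalog/book/title/author) and the table design are hypothetical.
import sqlite3
import xml.etree.ElementTree as ET

xml_doc = """
<catalog>
  <book id="1"><title>Data and Reality</title><author>William Kent</author></book>
  <book id="2"><title>An Introduction to Database Systems</title><author>C. J. Date</author></book>
</catalog>
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (book_id INTEGER PRIMARY KEY, title TEXT, author TEXT)")

root = ET.fromstring(xml_doc)
for book in root.findall("book"):
    conn.execute(
        "INSERT INTO book (book_id, title, author) VALUES (?, ?, ?)",
        (int(book.get("id")), book.findtext("title"), book.findtext("author")),
    )
conn.commit()

# The shredded values can now be queried with ordinary SQL.
for row in conn.execute("SELECT title, author FROM book ORDER BY book_id"):
    print(row)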


Mastering Reference Data


Malcolm Chisholm

President

Askget.com Inc


This presentation underscored reference data as a special class of data in an enterprise information architecture – a class that has its own special behavior and its own special management needs. Data Administrators have to understand what these needs are and create an organization and tool set that enables reference data to be managed effectively. Failure to do so leads to data quality problems, severe limitations on the ability to share data, and the incorrect implementation of business rules. Practical advice was given on how to achieve good reference data management, and the session was marked by a lively interaction that demonstrated the relevance of this topic to the attendees.


Data and Databases


Joe Celko

Professor and VP of Relational Database Management Systems

Northface University


This technical class examined scales, measurements, and how to encode data for a database. Part two of the class examined how to apply the theory and techniques of part one in an SQL database environment.

NIGHT SCHOOL

Monday, May 3

5:00 pm – 6:00 pm


One Purdue Information Factory


Brad Skiles

Director, Data Services and Administration

Purdue University


Purdue is preparing to migrate from diverse legacy applications, some of which are over 30 years old, to an ERP environment. Prior to this move, Purdue will be implementing a new data architecture called the One Purdue Information Factory (OPIF). This presentation described how the OPIF attempts to embed quality in Purdue data through the practical application of these components. Specifically, Purdue plans to:

implement a scalable and integrated architecture based upon the Bill Inmon information factory architecture

capitalize on a robust metadata engine

assess and correct data content in the legacy systems

implement a state-of-the-art business intelligence tool suite


Bare-handed Data Modeling: A Polemic


John Schley

Senior Data Administrator

Principal Residential Mortgage, Inc.


While we data people feel that our role and deliverables are very valuable to the companies we work for, there are a lot of people on the business and technical side who do not agree. They are not all idiots.


This presenter began by listing the three main deliverables of a data modeling effort -- data models, business definitions, and DDL. He detailed how each can be nearly worthless to our business and technical partners.


Then the presenter listed 19 ways to refocus on the real value of data modeling--not as a competency in itself but as a way to introduce the field of data management to the enterprise. In short, we need to more fully support the business by reaching up and across the Zachman Framework to address real issues.


What is Why? What is How? And What is What?

Distinguishing Business Rules From Data And Processes


Tom Yanchek

Manager-Corporate Repository/Architecture

United Parcel Service


Distinguishing a business rule from the data it supports or the process it enforces can be tricky and often confusing. This presentation discussed the following topics (a brief illustrative sketch follows the list):

  • How to overcome common roadblocks when identifying business rules

  • How to separate data, process and rules without losing their dependencies

  • How employing an organized Business Rule Strategy helps build better enterprises

  • How employing an organized Business Rule Strategy helps enhance or possibly consolidate existing portions of an enterprise

  • Tips and Techniques for identifying a business process, a business rule or business data

  • How to measure the success of the approach and how to identify, employ and implement quality improvement procedures

  • How to minimize or significantly reduce development and maintenance costs – in some cases by 25-35%

  • How to maintain business processes, business rules, and business data for verification, acceptance, and reasonability
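
The hypothetical Python sketch below is one way to picture the separation the presentation argued for: the order record is data, submit_order is a process, and requires_approval is the business rule the process enforces but does not embed. The order domain and the approval threshold are invented for illustration.

# Illustrative sketch separating data, process, and rule while keeping their
# dependencies explicit. The domain (orders) and the approval threshold are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class Order:                      # data: a fact the business records
    order_id: int
    amount: float
    approved: bool = False

def requires_approval(order: Order) -> bool:
    """Business rule: orders over 500.00 need managerial approval."""
    return order.amount > 500.00

def submit_order(order: Order) -> str:
    """Business process: enforces the rule but does not embed its logic."""
    if requires_approval(order) and not order.approved:
        return f"Order {order.order_id} routed for approval"
    return f"Order {order.order_id} accepted"

print(submit_order(Order(1, 120.00)))   # accepted
print(submit_order(Order(2, 900.00)))   # routed for approval

Keeping the rule in its own function means it can be changed or consolidated without rewriting the process that depends on it.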



Meta Data Starts with an Enterprise Architecture Data Model


John Sharp

Principal Consultant

Sharp Informatics


Collecting, analyzing and reviewing your organization’s information status and future goals are required for successfully implementing an Enterprise Architecture. This presentation established the rules to make your Enterprise Architecture project successful. The basic model contains major objects such as principles, technology, standards, business drivers, processes and applications. The relations between these objects can be expressed as simple sentences. A simple procedure was shown for extending the EA Data Model to include additional knowledge by converting simple true statements into valid fact types. Examples of improvements in the quality and quantity of collected data using the application created from an EA Data Model were presented.
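
As a loose illustration of that procedure, the sketch below records relations between EA objects as simple sentences and accepts a new fact only if it matches a declared fact type. The verb phrases and example instances are assumptions; only the object types (applications, processes, standards, technology, business drivers) come from the model described above.

# Rough sketch of capturing EA Data Model relations as simple sentences
# ("fact types"). The verb phrases and example instances are invented;
# the object types come from the basic model described in the summary.
fact_types = {
    # (subject object type, verb phrase, object object type)
    ("Application", "supports", "Process"),
    ("Process", "realizes", "Business Driver"),
    ("Standard", "constrains", "Technology"),
}

facts = [
    ("Application", "Payroll", "supports", "Process", "Pay Employees"),
    ("Process", "Pay Employees", "realizes", "Business Driver", "Retain Staff"),
]

# A fact is accepted only if its shape matches a declared fact type.
for s_type, s, verb, o_type, o in facts:
    assert (s_type, verb, o_type) in fact_types, f"No fact type for '{verb}'"
    print(f"{s_type} '{s}' {verb} {o_type} '{o}'.")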


Data Management Using Metadata Registries


Judith Newton

Computer Specialist

NIST


The adoption of XML as the information interchange format for the Web presents a set of challenges and opportunities for data managers. While XML makes it easy to describe the format of information objects and the relationships among them, it has limited provisions for conveying the semantic content of the objects. Metadata registries based on the ISO/IEC standard 11179 supplement XML schema descriptions and all other data applications with descriptive and relational metadata, allowing all representations of information to be mapped together in a single resource. The new version of 11179 introduces a Metamodel that allows incorporation of conceptual and logical level objects along with multiple representations of these objects. This makes it a vessel uniquely suited to store metadata for multiple applications. Principles developed for the establishment of standardized names through naming conventions can be used to administer namespaces and the objects contained in namespaces.
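
The following sketch shows what a registry entry in the spirit of ISO/IEC 11179 might look like: a data element whose standardized name is built from an object class, a property, and a representation term, carrying the definition that an XML schema alone would omit. The attribute names and the example element are assumptions for illustration, not the standard's normative metamodel.

# Minimal sketch of a metadata-registry entry in the spirit of ISO/IEC 11179.
# The attribute names and example values are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str                 # standardized name built from the parts below
    object_class: str         # the thing being described, e.g. "Employee"
    property: str             # the characteristic, e.g. "Birth"
    representation: str       # representation term, e.g. "Date"
    definition: str           # semantic content an XML schema alone omits
    datatype: str

registry = {}

def register(element: DataElement) -> None:
    """Add an element to the registry, keyed by its standardized name."""
    registry[element.name] = element

register(DataElement(
    name="Employee Birth Date",
    object_class="Employee",
    property="Birth",
    representation="Date",
    definition="The date on which an employee was born.",
    datatype="xs:date",
))

print(registry["Employee Birth Date"].definition)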


Integrating Unstructured Data


Michael Reed

Principal Consultant

Enterprise Warehousing Solutions, Inc.


This night school class covered the concepts of unstructured data and the issues surrounding integration with more structured (e.g., database) data. Entity extraction tools were discussed as a means of extracting tagged information from documents and merging it into existing structured tables. The class also concentrated on the application of accepted industry standards and protocols to unstructured metadata. Standards covered included Dublin Core, ISO 11179, the Resource Description Framework (RDF), and the Java Metadata Interface (JMI).
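
As one small example of the kind of tagging discussed, the sketch below describes an unstructured document with Dublin Core terms in RDF, using the third-party rdflib package, a library chosen for this illustration rather than one named in the class; the document URI and property values are invented.

# Sketch of describing an unstructured document with Dublin Core terms in RDF.
# Uses the third-party rdflib package; the URI and values are invented.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

doc = URIRef("http://example.com/docs/quarterly-report.pdf")

g = Graph()
g.add((doc, DC.title, Literal("Quarterly Report")))
g.add((doc, DC.creator, Literal("Finance Department")))
g.add((doc, DC.date, Literal("2004-03-31")))
g.add((doc, DC.subject, Literal("revenue")))   # a term from a taxonomy

# Serialize the metadata so it can be collated with structured sources.
print(g.serialize(format="turtle"))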


Also discussed were concepts around classifications, including taxonomies and ontologies. A poll of the class showed that, although most firms were interested in developing classification schemes, almost none had yet begun to do so. The summary discussion emphasized the point that in order to get a true view of "all that we know", both structured and unstructured data must be collated, classified, and made available to end-users.


Conducting Database Design Project Meetings


Gordon Everest

Professor

University of Minnesota


Dr. Everest provided a synthesis of his experiences in gathering database design requirements through interviews and conducting database design project meetings. Reflecting on the lessons learned, he offered a set of ideas and best practices which should be considered by anyone attempting to lead a database design project, including:

  • Interviews vs. facilitated group sessions

  • Picking the users to interview or invite to the table.

  • Preparations before the interviews or before the meetings

  • Accelerated (one meeting possibly over several days, e.g., JAD) vs. extended series of meetings

  • Findings of an experiment which compared the two approaches to group sessions



To Laugh or Cry 2004: Further Fallacies in Data Management


Fabian Pascal

Analyst, Editor & Publisher

Database Debunkings


A lot of what is being said, written, or done in the information management field by vendors, the trade press and "experts" is increasingly confused, irrelevant, misleading, or outright wrong. The problems are so acute that, claims to the contrary notwithstanding, knowledge, practices and technology are actually regressing! This presentation exposed some of the persistent misconceptions prevalent in the information/data management field and their costly practical consequences.

SPECIAL INTEREST GROUPS

Tuesday, May 4

7:15 am – 8:15 am


Automating Conversions Between Metadata Standards Using XML and XSLT


Kara Nance

Professor of Computer Science

University of Alaska Fairbanks


Brian Hay

Computer Science

University of Alaska Fairbanks


Many data system designers and organizations are faced with a dilemma when deciding which metadata standard(s) to apply when designing data systems. In some cases, datasets don't easily fit into the scope of currently defined domains, but might in the future as standards for new domains are developed. The increased emphasis on new metadata standards, coupled with user demand for seamless data portals, creates conflict for many data system designers, often delaying production by lengthening the design phase of the system development life cycle. This session presented a process through which the conflict can be mediated and design delays minimized. XSLT and XML can be used to automatically transform datasets between metadata standards, creating a single data format that can then be manipulated as a single dataset.
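
A minimal sketch of the XML-plus-XSLT approach follows, using the third-party lxml package. The source and target element names and the stylesheet mapping are invented for illustration and are not the specific crosswalks presented in this session.

# Sketch of an automated crosswalk between two metadata formats using XSLT,
# via the third-party lxml package. The element names and the stylesheet
# mapping are invented for illustration.
from lxml import etree

source = etree.XML(
    "<record><docTitle>Sea Ice Extent 2003</docTitle>"
    "<docAuthor>K. Nance</docAuthor></record>"
)

# Stylesheet mapping the hypothetical source elements onto Dublin Core names.
xslt = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <xsl:template match="/record">
    <metadata>
      <dc:title><xsl:value-of select="docTitle"/></dc:title>
      <dc:creator><xsl:value-of select="docAuthor"/></dc:creator>
    </metadata>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)
print(str(transform(source)))

The same pattern extends to full metadata standards: one stylesheet per source format, each producing the single target format that the data portal consumes.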


KEYNOTE PRESENTATION

Tuesday, May 4

8:45 am – 9:45 am


Database Graffiti: Scribbles from the Askew Wall


Chris Date

Independent Author, Lecturer, Researcher and Consultant


This presentation was based in part on Chris Date's writings and columns over many years. It consisted of a series of quotations, aphorisms, and anecdotes - seasoned with a fair degree of personal commentary - that were directly or indirectly relevant to the subject of database management. The presentation was not technically deep, but several serious messages lay just beneath the surface. Topics included:

- Objects and objections

- Normalization, networks and nulls

- The role of simplicity

- The joy of self-reference

- Relational misconceptions

- Some good quotes

- Books and book reviews


CONFERENCE SESSIONS

Tuesday, May 4

10:15 am – 11:15 am


Entropy, Energy, and Data Resource Quality


Michael Brackett

Consulting Data Architect

Data Resource Design & Remodeling


Data quality is becoming increasingly important for every public and private sector organization, particularly as organizations are held accountable for the quality of their data. However, there are many different perspectives on data quality, ranging from developing databases to integrating data to business information. There are also many perceptions about what quality is and what quality costs. This presentation stepped back and took a basic look at quality from the standpoint of physics - entropy and energy. It covered basic topics about:

- How data go bad

- What it takes to keep data good

- The costs for collecting good data

- The costs of integrating disparate data

- Is data quality really free

- When is quality good enough


Adding Value in a Changing Environment - A Case Study


Duncan McMillan

Enterprise Architect (Information)

Westpac Banking Corporation (NZ)


Paul Brady