QN:-1
What is DOM? Compare DOM and SAX?
The DOM specification is published by the W3C at http://www.w3.org/DOM.
DOM by itself is just a specification for a set of interfaces defined by the W3C. In fact, the
DOM interfaces are defined independently of any particular programming language. You
can write DOM code in just about any programming language, such as Java,
ECMAScript (a standardized version of JavaScript/JScript), or C++. W3C uses the Object
Management Group's (OMG) Interface Definition Language (IDL) to define DOM in a
language-neutral way, and language-specific bindings map those interfaces onto each
language. The DOM specification itself includes bindings for Java and ECMAScript, but
third parties have defined bindings for many other languages.
DOM is not a set of data structures; rather it is an object model describing XML
documents.
• DOM does not specify what information in a document is relevant or how information
should be structured.
• DOM has nothing to do with COM, CORBA, or other technologies that include the
words "object model."
Why Do I Need DOM?
The main reason for using DOM is to create or modify an XML document programmatically.
You can use DOM just to read an XML document, but as you will see in the next
chapter, SAX is often a better candidate for the read-only case.
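For instance, here is a minimal sketch of building a small document in memory and writing it out, using the standard javax.xml.parsers and javax.xml.transform APIs; the element names are only illustrative:

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class BuildDoc {
    public static void main(String[] args) throws Exception {
        // Create an empty DOM Document in memory
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        // Build a tiny tree: <catalog><book>XML Unleashed</book></catalog>
        Element catalog = doc.createElement("catalog");
        Element book = doc.createElement("book");
        book.appendChild(doc.createTextNode("XML Unleashed"));
        catalog.appendChild(book);
        doc.appendChild(catalog);
        // Serialize the tree to standard output
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(System.out));
    }
}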
DOM is not practical for small devices such as PDAs and cellular
phones. With the rapid proliferation of these devices and demand for greater functionality,
XML will very likely play a role in this market.
DOM Levels
The DOM working group works on phases (or levels) of the specification. At the time of
this writing, three levels are in the works. The DOM Level 1 and Level 2 specifications
are W3C recommendations.
DOM Interfaces
The DOM specification defines a family of interfaces, including Node, Document, Element, Attr, Text, and NamedNodeMap, each described in the specification along with its methods.
What Is SAX?
SAX is an API that can be used to parse XML documents. A parser is a program that
reads data a character at a time and returns manageable pieces of data. For example, a parser for
the English language might break up a document into paragraphs, words, and
punctuation. In the case of XML, the important pieces of data include elements, attributes,
text, and so on. This is what SAX does.
SAX provides a framework for defining event listeners, or handlers. These handlers are
written by developers interested in parsing documents with a known structure. The handlers
are registered with the SAX framework in order to receive events. Events can
include start of document, start of element, end of element, and so on. The handlers contain
a number of methods that will be called in response to these events. Once the handlers
are defined and registered, an input source can be specified and parsing can begin.
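A minimal sketch of this flow, using the standard Java SAX API (the output messages are only illustrative):

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class EchoHandler extends DefaultHandler {
    // Called by the parser at the start of each element
    public void startElement(String uri, String localName,
                             String qName, Attributes attrs) {
        System.out.println("start of element: " + qName);
    }
    // Called by the parser for character data between tags
    public void characters(char[] ch, int start, int length) {
        System.out.println("text: " + new String(ch, start, length).trim());
    }
    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        // Register the handler and parse the file named on the command line
        parser.parse(args[0], new EchoHandler());
    }
}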
SAX was originally developed in Java, but similar implementations are available in other
languages as well. There are implementations for Perl, Python, and C++.
If you are writing a tool or a standalone program to process XML, SAX is a good way to
do it.
Some SAX parsers can validate a document against a Document Type Definition (DTD).
Validating parsers can also tell you specifically where validation has failed.
SAX is, in many ways, much simpler than DOM. There is no need to model every possible
type of object that can be found in an XML document. This makes the API easy to
understand and easier to use. DOM contains many interfaces, each containing many
methods. SAX comprises a handful of classes and interfaces. SAX is a much lower-level
API when compared with DOM. For these reasons, SAX parsers tend to be smaller
than DOM implementations. In fact, many DOM implementations use SAX parsers
under the hood to read in XML documents.
SAX is an event-based API. Instead of loading an entire document into memory all at
once, SAX parsers read documents and notify a client program when elements, text,
comments, and other data of interest are found. SAX parsers send you events continuously,
telling you what was found next.
The DOM parses XML in space, whereas SAX parses XML in time. In essence, the
DOM parser hands you an entire document and allows you to traverse it any way you
like. This can take a lot of memory, so SAX can be significantly more efficient for large
documents. In fact, you can process documents larger than available system memory, but
this is not possible with DOM. SAX can also be faster, because you don’t have to wait
for the entire document to be loaded. This is especially valuable when reading data over
a network.
In some cases, you might want to build your own object model of an XML document
because DOM might not describe your specific document efficiently or in the way you
would like. You could solve the problem by loading a document using DOM and translating
the DOM object model into your own object model. However, this can be very inefficient,
so SAX is often a better solution.
Disadvantages
SAX is not a perfect solution for all problems. For instance, it can be a bit harder to visualize
compared to DOM because it is an event-driven model. SAX parsing is “single
pass,” so you can’t back up to an earlier part of the document any more than you can
back up from a serial data stream. Moreover, you have no random access at all. Handling
parent/child relationships can be more challenging as well.
Another disadvantage is that the current SAX implementations are read-only parsers.
They do not provide the ability to manipulate a document or its structure (this feature
may be added in the future). DOM is the way to go if you want to manipulate a document
in memory.
There is no formal specification for SAX. The interfaces and behavior are defined through
existing code bases. This means there is no way to validate a SAX parser or to determine
whether it works correctly. In the words of Dave Megginson, “It’s more like English
Common Law rather than the heavily codified Civil Code of ISO or W3C specifications.”
Even considering these limitations, SAX does its job well. It’s lightweight, simple, and
easy to use. If all you want to do is read XML, SAX will probably do what you need.
QN:-2
Develop code that creates a DOM Document object and searches for an element in an XML document.
In this example, we parse an XML document into a DOM Document object and then use a
NodeIterator to search it for elements with a given name.
There are two classes. The first one, IteratorApp.java, contains the
application code. The second one, NameNodeFilter.java, selects nodes with a
given name.
IteratorApp.java
package com.madhu.xml;
import java.io.*;
import org.w3c.dom.*;
import org.w3c.dom.traversal.*;
import javax.xml.parsers.*;
public class IteratorApp {
protected DocumentBuilder docBuilder;
protected Document document;
protected Element root;
public IteratorApp() throws Exception {
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
docBuilder = dbf.newDocumentBuilder();
DOMImplementation domImp = docBuilder.getDOMImplementation();
if (domImp.hasFeature("Traversal", "2.0")) {
System.out.println("Parser supports Traversal");
}
}
public void parse(String fileName) throws Exception {
document = docBuilder.parse(new FileInputStream(fileName));
root = document.getDocumentElement();
System.out.println("Root element is " + root.getNodeName());
}
public void iterate() {
NodeIterator iter =
((DocumentTraversal)document).createNodeIterator(
root, NodeFilter.SHOW_ELEMENT,
new NameNodeFilter("book"), true);
Node n = iter.nextNode();
while (n != null) {
System.out.println(n.getFirstChild().getNodeValue());
n = iter.nextNode();
}
}
public static void main(String args[]) throws Exception {
IteratorApp ia = new IteratorApp();
ia.parse(args[0]);
ia.iterate();
}
}
NameNodeFilter.java
package com.madhu.xml;
import org.w3c.dom.*;
import org.w3c.dom.traversal.*;
public class NameNodeFilter implements NodeFilter {
protected String name;
public NameNodeFilter(String inName) {
name = inName;
}
public short acceptNode(Node n) {
if (n.getNodeName().equals(name)) {
return FILTER_ACCEPT;
} else {
return FILTER_REJECT;
}
}
}
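For reference, a small input file the program could be run against (the file name and titles here are only hypothetical):

<?xml version="1.0"?>
<catalog>
<book>XML Unleashed</book>
<book>Java and XML</book>
</catalog>

Running java com.madhu.xml.IteratorApp catalog.xml would print the root element name and then the text content of each book element, because the NameNodeFilter accepts only elements named "book".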
Since XML is inherently sequential and objects are (usually) not, XML data binding
mappings often have difficulty preserving all the information in an XML document.
Specifically, information like comments, XML entity references, and sibling order may
fail to be preserved in the object representation created by the binding application. This
is not always the case; sufficiently complex data binders are capable of preserving
100% of the information in an XML document.
Similarly, since objects in computer memory are not inherently sequential, and may
include links to other objects (including self-referential links), XML data binding
mappings often have difficulty preserving all the information about an object when it is
marshalled to XML.
An alternative approach to automatic data binding relies instead on hand-
crafted XPath expressions that extract the data from XML. This approach has a
number of benefits. First, the data binding code only needs proximate knowledge (e.g.,
topology, tag names, etc.) of the XML tree structure, which developers can determine
by looking at the XML data; XML schemas are no longer mandatory. Furthermore,
XPath allows the application to bind the relevant data items and filter out everything
else, avoiding the unnecessary processing that would be required to completely
unmarshall the entire XML document. The drawback of this approach is the lack of
automation in implementing the object model and XPath expressions. Instead the
application developers have to create these artifacts manually.
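A minimal sketch of this XPath-based approach, using the standard javax.xml.xpath API (the file name and expression are only illustrative):

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathBinding {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse("order.xml");
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Bind only the data items the application cares about,
        // ignoring the rest of the document
        NodeList titles = (NodeList) xpath.evaluate(
                "//book/title", doc, XPathConstants.NODESET);
        for (int i = 0; i < titles.getLength(); i++) {
            System.out.println(titles.item(i).getTextContent());
        }
    }
}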
In the JAXB framework, we can parse XML documents into a suitable Java object.
This technique is referred to as unmarshaling. The JAXB framework also provides
the capability to generate XML documents from Java objects, which is referred to
as marshaling.
JAXB is easier to use and a more efficient technique for processing XML documents
than the SAX or DOM API. Using the SAX API, you have to create a custom content
handler for each XML document structure. Also, during the development of the content,
you have to create and manage your own state machine to keep track of your place in the
document. For very complex XML documents, the development process is very cumbersome.
Using JAXB, an application can parse an XML document by simply unmarshaling
the data from an input stream.
JAXB is similar to DOM in that we can create XML documents programmatically
and perform validation.
JAXB Solution
In the JAXB solution, we will model the rental property database as an
XML document. First we need to review the database schema. After
reviewing the schema, we will develop our desired XML document
based on an XML schema. After we have the XML
schema developed, we can create the JAXB binding schema. The JAXB
binding schema contains instructions on how to bind the XML schema to
a Java class. We'll take the JAXB binding schema and generate the
appropriate Java classes.
To summarize, we'll follow these steps: review the database schema, design an XML schema for the rental property data, create the JAXB binding schema, and generate and use the resulting Java classes.
EX:-
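A minimal sketch of the unmarshal/marshal round trip with the javax.xml.bind API; the package name and file name are only illustrative and stand in for whatever the generated binding actually uses:

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;

public class RentalDemo {
    public static void main(String[] args) throws Exception {
        // Context for the package containing the generated binding classes
        JAXBContext ctx = JAXBContext.newInstance("com.madhu.rental");
        // Unmarshal: XML document -> Java object tree
        Unmarshaller u = ctx.createUnmarshaller();
        Object rentals = u.unmarshal(new File("rentals.xml"));
        // ... work with the object tree here ...
        // Marshal: Java object tree -> XML document
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(rentals, System.out);
    }
}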
QN:-4
Compare and contrast well-formed and valid documents.
Well-Formed Documents:-
An XML document is well formed if it follows all the preceding syntax rules of
XML. On the other hand, if it includes inappropriate markup or characters that
cannot be processed by XML parsers, the document cannot be considered well
formed. It goes without saying that an XML document can't be partially well
formed. And, by definition, if a document is not well formed, it is not XML. This
means that there is no such thing as an XML document that is not well formed, and
XML processors are not required to process these documents.
Valid Documents:-
Although the property of "well-formedness" is a matter of making sure the XML
document complies with syntactic rules, the property of validity is a different
ballgame. A well-formed XML document is considered valid only if it contains a
proper Document Type Declaration and if the document obeys the constraints of
that declaration. In most cases, the constraints of the declaration will be expressed
as a DTD or an XML Schema. Well-formed XML documents are designed for use
without any constraints, whereas valid XML documents explicitly require these
constraint mechanisms. In addition to constraining the possible elements and the
ordering of those elements in a document, valid XML documents can take
advantage of certain advanced features of XML that are not available to merely
well-formed documents due to their lack of a DTD or XML Schema. Some of
these advanced features include linking mechanisms, value and range bounding,
and data typing.
Although the creation of well-formed XML is a simple process, the use of valid
XML documents can greatly improve the quality of document processes. Valid
XML documents allow users to take advantage of content management, business-
to-business transactions, enterprise integration, and other processes that require the
exchange of constrained XML documents. After all, any document can be well
formed, but only specific documents are valid when applied against a constraining
DTD or schema.
DIFFERENCE:-
1. Well-formed XML means that the XML is correct (it has only one root node, and all elements match an end element tag). Valid XML means that the XML can be validated against an XML Schema or DTD, and that all the tags in the XML are also in the Schema or DTD, and in the right place.
2. Well-formed XML is XML that has all tags closed in the proper order and, if it has a declaration, it has it first thing in the file with the proper attributes.
3. Well-formed XML conforms to the XML spec, and valid XML conforms to a given schema.
4. Another way to put it is that well-formed XML is syntactically correct (it can be parsed), while valid XML is semantically correct (it can be matched to a known vocabulary and grammar).
5. An XML document cannot be valid until it is well-formed. All XML documents are held to the same standard for well-formedness (the W3C Recommendation). One XML document can be valid against some schemas, and invalid against others. There are a number of schema languages, many of which are themselves XML-based.
6. Well-formed XML is XML that meets the syntactic requirements of the language: not missing any closing tags, having all your singleton tags use <whatever /> instead of just <whatever>, and having your closing tags in the right order.
7. Valid XML is XML that uses a DTD and complies with all its requirements. So if you use an attribute improperly, you violate the DTD and the document isn't valid.
8. All valid XML is well-formed, but not all well-formed XML is valid.
9. XML is well-formed if it meets the requirements set out by the standard for all XML documents: having a single root node, having nodes correctly nested, all elements having a closing tag (or using the empty-element shorthand of a slash before the closing angle bracket), attributes being quoted, and so on. Being well-formed just means the document adheres to the rules of XML and can therefore be parsed properly.
10. If XML conforms to its DTD rules, then it is valid XML. If an XML document conforms to the XML rules (all tags started are closed, there is a root element, and so on), then it is well-formed XML.
11. Valid XML is XML that succeeds validation against a DTD.
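A small illustration: the document below is well formed (one root element, properly nested and closed tags, quoted attribute values), but it is valid only with respect to a DTD or schema that actually declares these elements and this attribute:

<?xml version="1.0"?>
<note date="2010-01-01">
<to>Reader</to>
<body>Remember the meeting.</body>
</note>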
An extra advantage of using DTDs in this situation is that a single DTD could be
referenced by all the organization’s applications. The defined structure of the data
would be in a centralized resource, which means that any changes to the data
structure definition would only need to be implemented in one place. All the
applications that referenced the DTD would automatically use the new, updated
structure.
A DTD can be internal, residing within the body of a single XML document. It can
also be external, referenced by the XML document. A single XML document could
even have both a portion (or subset) of its DTD that is internal and a portion that is
external. As mentioned in the previous paragraph, a single external DTD can be
referenced by many XML documents. Because an external DTD may be
referenced by many documents, it is a good repository for global types of
definitions (definitions that apply to all documents). An internal DTD is good to
use for rules that only apply to that specific document. If a document has both
internal and external DTD subsets, the internal rules override the external rules in
cases where the same item is defined in both subsets.
EX:-
<?xml version="1.0" encoding="UTF-8"?>
<!ELEMENT PurchaseOrder (ShippingInformation, BillingInformation, Order)>
<!ATTLIST PurchaseOrder
Tax CDATA #IMPLIED
Total CDATA #IMPLIED
>
<!ELEMENT ShippingInformation (Name, Address, (((BillingDate, PaymentMethod)) | ((DeliveryDate, Method))))>
<!ELEMENT BillingInformation (Name, Address, (((BillingDate, PaymentMethod)) | ((DeliveryDate, Method))))>
<!ELEMENT Order (Product+)>
<!ATTLIST Order
SubTotal CDATA #IMPLIED
ItemsSold CDATA #IMPLIED
>
<!ELEMENT Name (#PCDATA)>
<!ELEMENT Address (Street, City, State, Zip)>
<!ELEMENT BillingDate (#PCDATA)>
<!ELEMENT PaymentMethod (#PCDATA)>
<!ELEMENT DeliveryDate (#PCDATA)>
<!ELEMENT Method (#PCDATA)>
<!ELEMENT Product EMPTY>
<!ATTLIST Product
Name CDATA #IMPLIED
Id CDATA #IMPLIED
Price CDATA #IMPLIED
Quantity CDATA #IMPLIED
>
<!ELEMENT Street (#PCDATA)>
<!ELEMENT City (#PCDATA)>
<!ELEMENT State (#PCDATA)>
<!ELEMENT Zip (#PCDATA)>
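An instance document that is valid against this DTD might look like the following (the DTD file name and the data values are only illustrative):

<?xml version="1.0"?>
<!DOCTYPE PurchaseOrder SYSTEM "PurchaseOrder.dtd">
<PurchaseOrder Tax="1.25" Total="16.20">
<ShippingInformation>
<Name>Pat Smith</Name>
<Address>
<Street>123 Main Street</Street>
<City>Dallas</City>
<State>TX</State>
<Zip>75201</Zip>
</Address>
<DeliveryDate>2002-01-05</DeliveryDate>
<Method>USPS</Method>
</ShippingInformation>
<BillingInformation>
<Name>Pat Smith</Name>
<Address>
<Street>123 Main Street</Street>
<City>Dallas</City>
<State>TX</State>
<Zip>75201</Zip>
</Address>
<BillingDate>2002-01-01</BillingDate>
<PaymentMethod>Credit Card</PaymentMethod>
</BillingInformation>
<Order SubTotal="14.95" ItemsSold="1">
<Product Name="Rice" Id="1001" Price="14.95" Quantity="1"/>
</Order>
</PurchaseOrder>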
Validation
The process of checking to see if an XML document conforms to a schema
is called validation, which is separate from XML's core concept of
syntactic well-formedness. All XML documents must be well-formed, but it
is not required that a document be valid unless the XML parser is
"validating," in which case the document is also checked for conformance
with its associated schema. DTD-validating parsers are most common, but
some support W3C XML Schema or RELAX NG as well.
Documents are only considered valid if they satisfy the requirements of the
schema with which they have been associated. These requirements typically
include constraints on which elements and attributes may appear, how elements
may be nested and ordered, and what data types element content and attribute
values may take.
EX:- An XML Schema for the purchase order described by the preceding DTD:
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<xsd:annotation>
<xsd:documentation>
Purchase Order schema for an online grocery store.
</xsd:documentation>
</xsd:annotation>
<xsd:element name="PurchaseOrder" type="PurchaseOrderType"/>
<xsd:complexType name="PurchaseOrderType">
<xsd:all>
<xsd:element name="ShippingInformation" type="InfoType" minOccurs="1" maxOccurs="1"/>
<xsd:element name="BillingInformation" type="InfoType" minOccurs="1" maxOccurs="1"/>
<xsd:element name="Order" type="OrderType" minOccurs="1" maxOccurs="1"/>
</xsd:all>
<xsd:attribute name="Tax">
<xsd:simpleType>
<xsd:restriction base="xsd:decimal">
<xsd:fractionDigits value="2"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
<xsd:attribute name="Total">
<xsd:simpleType>
<xsd:restriction base="xsd:decimal">
<xsd:fractionDigits value="2"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
</xsd:complexType>
<xsd:group name="ShippingInfoGroup">
<xsd:all>
<xsd:element name="DeliveryDate" type="DateType"/>
<xsd:element name="Method" type="DeliveryMethodType"/>
</xsd:all>
</xsd:group>
<xsd:group name="BillingInfoGroup">
<xsd:all>
<xsd:element name="BillingDate" type="DateType"/>
<xsd:element name="PaymentMethod" type="PaymentMethodType"/>
</xsd:all>
</xsd:group>
<xsd:complexType name="InfoType">
<xsd:sequence>
<xsd:element name="Name" minOccurs="1" maxOccurs="1">
<xsd:simpleType>
<xsd:restriction base="xsd:string"/>
</xsd:simpleType>
</xsd:element>
<xsd:element name="Address" type="AddressType" minOccurs="1" maxOccurs="1"/>
<xsd:choice minOccurs="1" maxOccurs="1">
<xsd:group ref="BillingInfoGroup"/>
<xsd:group ref="ShippingInfoGroup"/>
</xsd:choice>
</xsd:sequence>
</xsd:complexType>
<xsd:simpleType name="DateType">
<xsd:restriction base="xsd:date"/>
</xsd:simpleType>
<xsd:simpleType name="DeliveryMethodType">
<xsd:restriction base="xsd:string">
<xsd:enumeration value="USPS"/>
<xsd:enumeration value="UPS"/>
<xsd:enumeration value="FedEx"/>
<xsd:enumeration value="DHL"/>
<xsd:enumeration value="Other"/>
</xsd:restriction>
</xsd:simpleType>
<xsd:simpleType name="PaymentMethodType">
<xsd:restriction base="xsd:string">
<xsd:enumeration value="Check"/>
<xsd:enumeration value="Cash"/>
<xsd:enumeration value="Credit Card"/>
<xsd:enumeration value="Debit Card"/>
<xsd:enumeration value="Other"/>
</xsd:restriction>
</xsd:simpleType>
<xsd:complexType name="AddressType">
<xsd:all>
<xsd:element name="Street" minOccurs="1">
<xsd:simpleType>
<xsd:restriction base="xsd:string"/>
</xsd:simpleType>
</xsd:element>
<xsd:element name="City" minOccurs="1" maxOccurs="1">
<xsd:simpleType>
<xsd:restriction base="xsd:string"/>
</xsd:simpleType>
</xsd:element>
<xsd:element name="State" type="StateType" minOccurs="1" maxOccurs="1"/>
<xsd:element name="Zip" type="ZipType" minOccurs="1" maxOccurs="1"/>
</xsd:all>
</xsd:complexType>
<xsd:simpleType name="ZipType">
<xsd:restriction base="xsd:string">
<xsd:minLength value="5"/>
<xsd:maxLength value="10"/>
<xsd:pattern value="[0-9]{5}(-[0-9]{4})?"/>
</xsd:restriction>
</xsd:simpleType>
<xsd:simpleType name="StateType">
<xsd:restriction base="xsd:string">
<xsd:length value="2"/>
<xsd:enumeration value="AR"/>
<xsd:enumeration value="LA"/>
<xsd:enumeration value="MS"/>
<xsd:enumeration value="OK"/>
<xsd:enumeration value="TX"/>
</xsd:restriction>
</xsd:simpleType>
<xsd:complexType name="OrderType">
<xsd:sequence>
<xsd:element name="Product" type="ProductType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
<xsd:attribute name="SubTotal">
<xsd:simpleType>
<xsd:restriction base="xsd:decimal">
<xsd:fractionDigits value="2"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
<xsd:attribute name="ItemsSold" type="xsd:positiveInteger"/>
</xsd:complexType>
<xsd:complexType name="ProductType">
<xsd:attribute name="Name" type="xsd:string"/>
<xsd:attribute name="Id" type="xsd:positiveInteger"/>
<xsd:attribute name="Price">
<xsd:simpleType>
<xsd:restriction base="xsd:decimal">
<xsd:fractionDigits value="2"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
<xsd:attribute name="Quantity" type="xsd:positiveInteger"/>
</xsd:complexType>
</xsd:schema>
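To check an instance document against a schema like this programmatically, one option is the standard javax.xml.validation API; a minimal sketch, with file names that are only illustrative:

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class ValidateOrder {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("PurchaseOrder.xsd"));
        Validator validator = schema.newValidator();
        // Throws a SAXException with a line/column message if the document is invalid
        validator.validate(new StreamSource(new File("order.xml")));
        System.out.println("order.xml is valid");
    }
}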
COMPARISON:-
Part of the reason why XML Schema is namespace aware while DTD
is not, is the fact that XML Schema is written in XML, and DTD is not. Therefore,
XML Schemas can be programmatically processed just like any XML document.
XML Schema also eliminates the need to learn another language, as it is written
in XML, unlike DTD.
1:- DTD can have only two types of data, CDATA and PCDATA. But in a schema you
can use all the primitive data types that you use in a programming language, and you
have the flexibility of defining your own custom data types. The developer building a
schema can create custom data types based on the core data types by using different
operators and modifiers.
3:- Data types like integer, string, and floating point numbers, and other data like country
codes and language codes, can be used in a schema to give flexibility to the validations.
4:- In summary: (1) XML Schema is namespace aware, while DTD is not; (2) XML
Schemas are written in XML, while DTDs are not; (3) XML Schema is strongly typed,
while DTD is not; (4) XML Schema has a wealth of derived and built-in data types that
are not available in DTD; (5) XML Schema does not allow inline definitions, while DTD
does.
A Schema is:
XML Schemas express shared vocabularies and allow machines to carry out
rules made by people. They provide a means for defining the structure,
content and semantics of XML documents.
6:- The critical difference between DTDs and XML Schema is that XML Schema
utilizes an XML-based syntax, whereas DTDs have a unique syntax held over from
SGML DTDs. Although DTDs are often criticized because of this need to learn a new
syntax, the syntax itself is quite terse. The opposite is true for XML Schema, which is
verbose, but which also makes use of tags and XML, so authors of XML should find the
syntax of XML Schema less intimidating.
7:- The goal of DTDs was to retain a level of compatibility with SGML for applications
that might want to convert SGML DTDs into XML DTDs. However, in keeping with one
of the goals of XML, "terseness in XML markup is of minimal importance," there is no
real concern with keeping the syntax brief.
[...]
8;- Typing
The most significant difference between DTDs and XML Schema is the
capability to create and use datatypes in Schema in conjunction with
element and attribute declarations. In fact, it's such an important difference
that one half of the XML Schema Recommendation is devoted to
datatyping and XML Schema. We cover datatypes in detail in Part III of this
book, "XML Schema Datatypes."
[...]
[...]
10:- Enumerations
So, let's say we had a shirt element, and we wanted to be able to define a size attribute
for the shirt that allowed users to choose a size: small, medium, or large. Our DTD would
look like this (a sketch of both notations appears after this list):
[...]
11:- DTD predates XML and is therefore not valid XML itself. That's probably the
biggest reason for XSD's invention.
12:- One difference is also that in a DTD, the content model of an element is
completely determined by its name, independently of where it appears in the document.
So, say you want to have a name child element of your person element that itself has
child elements first and last. Then if you wanted to have a name child element for
a city element in the same document, that would also need to have child
elements first and last. In contrast, XML Schema allows you to declare child
element types locally, so in this case you could declare the name child elements for
both person and city separately, giving them their proper content models in those
contexts.
13;- The other major difference is support for namespaces. Since DTDs are part of the
original XML specification (and inherited from SGML), they are not namespace-aware at
all because XML namespaces were specified later. You can use DTDs in combination
with namespaces, but it requires some contortions, like being forced to define the
prefixes in the DTD and using only those prefixes, instead of being able to use arbitrary
prefixes
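Returning to the enumeration example in item 10, a sketch of the same constraint in both notations (the shirt element and the default value are only illustrative):

<!-- DTD: enumerated attribute values -->
<!ELEMENT shirt (#PCDATA)>
<!ATTLIST shirt size (small | medium | large) "medium">

<!-- XML Schema: the same constraint with xsd:enumeration -->
<xsd:attribute name="size" default="medium">
<xsd:simpleType>
<xsd:restriction base="xsd:string">
<xsd:enumeration value="small"/>
<xsd:enumeration value="medium"/>
<xsd:enumeration value="large"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>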
QN:-
What is RDF? Explain the functionality of RDF with a suitable example.
What is RDF?
RDF stands for Resource Description Framework
RDF is a framework for describing resources on the web
RDF is designed to be read and understood by computers
RDF is not designed for being displayed to people
RDF is written in XML
RDF is a part of the W3C's Semantic Web Activity
RDF is a W3C Recommendation
RDF EXAMPLE
<?xml version="1.0"?>
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cd="http://www.recshop.fake/cd#">
<rdf:Description
rdf:about="http://www.recshop.fake/cd/Empire
Burlesque">
<cd:artist>Bob Dylan</cd:artist>
<cd:country>USA</cd:country>
<cd:company>Columbia</cd:company>
<cd:price>10.90</cd:price>
<cd:year>1985</cd:year>
</rdf:Description>
<rdf:Description
rdf:about="http://www.recshop.fake/cd/Hide your heart">
<cd:artist>Bonnie Tyler</cd:artist>
<cd:country>UK</cd:country>
<cd:company>CBS Records</cd:company>
<cd:price>9.90</cd:price>
<cd:year>1988</cd:year>
</rdf:Description>
.
.
.
</rdf:RDF>
The first line of the RDF document is the XML declaration. The XML
declaration is followed by the root element of RDF documents: <rdf:RDF>.
RDF Statements
RDF expresses information as statements (triples), each consisting of a subject (the resource being described), a predicate (a property), and an object (the property's value). In the example above, "http://www.recshop.fake/cd/Empire Burlesque" is a subject, cd:artist is a predicate, and "Bob Dylan" is an object.
RDF Schema:-
RDF Schema (variously abbreviated as RDFS, RDF(S), RDF-S, or RDF/S) is an
extensible knowledge representation language, providing basic elements for the
description of ontologies, otherwise called Resource Description Framework (RDF)
vocabularies, intended to structure RDF resources. The first version[1] was published by
the World-Wide Web Consortium (W3C) in April 1998, and the final[2] W3C
recommendation was released in February 2004.
To “connect the dots” in the RDF typing system, we need to cover two concepts:
• rdf:type
• rdfs:subClassOf
rdf:type
rdf:type enables class/instance statements to be made. When a resource has a type, the
resource is the subject in a statement where the predicate is rdf:type and the object
is the type.
rdfs:subClassOf
rdfs:subClassOf enables subset/superset statements to be made. The class is the
superset; the subclass is the subset. When a resource is a subclass, the resource is
the subject in a statement where the predicate is rdfs:subClassOf and the object is the
superclass, an RDF class.
The 16 RDF schema resources are divided into the following
six categories:
• Validation
• Core
• Hierarchy
• Documentation
• Schema control
• Extensibility
Validation:-
Validation is an operation many information owners will want to perform on their data,
just as they want a database schema to control the quality of their RDBMS and they
want an XML DTD or XML schema to provide some level of quality assurance for their
data. Recall that there are two forms of schema validation in RDF: object validity and
subject validity. rdfs:domain handles subject validity; rdfs:range handles object validity.
rdfs:domain
rdfs:domain is a type of rdfs:ConstraintProperty. It constrains the classes of subjects
(resources) for which the property is a valid predicate. If a property has no domain,
it can be the predicate of any subject. A property may have more than one domain. If
a property has more than one rdfs:domain constraint, it may be the predicate of
subjects that are subclasses of any one or all of the specified classes. The range and
domain of the rdfs:domain concept are specified only in a comment, so there is no
pictorial representation of rdfs:domain.
rdfs:range
rdfs:range is a type of rdfs:ConstraintProperty. It constrains the classes of objects
(resources) that the property may have as values. A property doesn't have to have a
range; if it has none, its objects are unconstrained. However, when imposed, the
constraints of rdfs:range are stronger than those imposed by rdfs:domain.
First, a property can have only one range. Second, the domain (subject) of an
rdfs:range predicate must be an rdf:Property, and its range (object) must be an rdfs:Class.
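For example, a property might be declared with both constraints like this (the vocabulary names are only illustrative):

<rdf:Property rdf:ID="author">
<rdfs:domain rdf:resource="#Book"/>
<rdfs:range rdf:resource="#Person"/>
</rdf:Property>

This says that author may be used as a predicate only on subjects of class Book, and that its objects must be resources of class Person.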
Core
Now that you understand the key concern of information owners—validation—
let’s move to the top of the RDF class hierarchy
rdfs:Resource
rdfs:Resource is the root of the RDF class hierarchy (refer back to Figure 23.8). All
things described by RDF expressions—all nodes and labels in the RDF graph—are
instances of rdfs:Resource. rdfs:Resource is also a class. In fact, rdfs:Resource is a type of
rdfs:Class, and rdfs:Class is a subclass of rdfs:Resource.
rdf:Property
rdf:Property represents the subset of RDF resources that are properties (see Table
23.1). rdf:Property is a type of rdfs:Class and a subclass of rdfs:Resource. rdf:Property has the
"rdf" namespace prefix, rather than the "rdfs" prefix, because the RDF model has
implicit properties, even if it lacks a schema.
rdf:type
rdf:type indicates that a resource is an instance of a specified class. That class must
be an instance of rdfs:Class or a subclass of rdfs:Class. This statement is true for the
resource that is known as rdfs:Class, which is a type of itself. (The
RDF graph, again, permits loops.)
Like rdf:Property, rdf:type has the "rdf" namespace prefix,
rather than the "rdfs" prefix, because the RDF model has implicit types, even if it
lacks a schema.
Class Hierarchy
The class hierarchy in RDF is set up with rdfs:Class, rdfs:subClassOf, and
rdfs:subPropertyOf (and rdf:type, which we've already looked at). Let's look at these
three elements.
rdfs:Class
As you have seen, rdfs:Class is both a subclass of rdfs:Resource and a type of itself.
However, RDF classes are both like and unlike classes as OO programmers
may think of them. RDF classes are like OO classes in that, through transitivity,
they can specify broad-to-specific categories such as "living being to animal to
dog." RDF classes are unlike OO classes, first, because they have no methods; they
don't do anything. (Markup never does.) Second, RDF classes could be called
extrinsic rather than intrinsic. Instead of defining a class in terms of features intrinsic
to its instances, an RDF schema will define predicates in terms of the classes of
subject or object to which they may be applied, extrinsically. (This allows testing for
subject and object validity.)
rdfs:subClassOf
rdfs:subClassOf is a type of rdf:Property. It specifies a subset/superset relation between
classes, a relation that is transitive. Only instances of the type rdfs:Class may have
an rdf:type property whose value is rdfs:Class.
Importantly, a class can never be declared to be a subclass of itself or any of its
own subclasses. (The RDF Schema specification cannot express this constraint
formally, though it is expressed in prose.) Therefore, although the RDF graph may
contain cycles, the class/subclass inheritance hierarchy that is a subgraph of the
RDF graph remains a tree, whose nodes are only instances of rdfs:Class. Finally,
RDF (unlike most object-oriented programming languages) permits multiple
inheritance—that is, a class may be a subclass of several classes.
rdfs:subPropertyOf
rdfs:subPropertyOf is a type of rdf:Property. It enables properties to be specialized,
a process similar to inheritance, except for properties instead of classes. Like the
subClassOf predicate, subPropertyOf is transitive and forms a hierarchy that is a proper
tree, like rdfs:subClassOf. Multiple specializations are also permitted.
Documentation
Documentation allows human-readable text to be attached to a resource, either as a
label or a comment. Because the content of the documentation elements is only
data, not statements, it does not affect the RDF graph in any way and therefore
does not enable machine understanding of the resource.
rdfs:label
rdfs:label provides for a human-readable representation of a URI, perhaps for
display. The domain (subject) of a label predicate must be an rdfs:Resource. The range
(object) must be an rdfs:Literal.
rdfs:comment
rdfs:comment permits human-readable documentation to be associated with a
resource. The domain (subject) of a comment predicate must be an rdfs:Resource. The
range (object) must be an rdfs:Literal.
rdfs:seeAlso
rdfs:seeAlso is a cross-reference that gives more information about a resource. The
nature of the information provided is not defined. The domain (subject) and range
(object) of an rdfs:seeAlso predicate must both be rdfs:Resource elements.
Schema Control
rdfs:isDefinedBy is a subproperty of rdfs:seeAlso. Its URI is meant to be the address of
the RDF Schema for the subject resource. The domain (subject) and range (object)
of an rdfs:isDefinedBy predicate must both be rdfs:Resource elements.
General Constraints
We now turn to the issue of constraints in general (that is, beyond the constraints
on domain and range, discussed earlier). At this point, there’s one caveat: Because
markup doesn’t do anything, RDFS doesn’t say what an application must do if a
constraint is violated. That is up to the application.
rdfs:ConstraintProperty
rdfs:ConstraintProperty is a subclass of both rdfs:ConstraintResource and rdf:Property. Both
rdfs:domain and rdfs:range are instances of it.
Extensibility
rdfs:ConstraintResource is
a type of rdfs:Class and a type of rdfs:Resource. It is present in the
model so that other constraint properties besides domain and range may be
subclassed from it.
Non-Model Validation
This type of validation is called “non-model” because expressing the notion that a
literal should be checked for being a literal or that the auto-generated counter for
container children should be derived from the actual number of children is
something the RDF engine would have to do, not the data model.
rdfs:Literal
rdfs:Literal is a type of rdfs:Class. An rdfs:Literal can contain atomic values such as textual
strings. The XML lang attribute can be used to express the fact that a literal is in a
human language, but this information does not become a statement in the graph.
rdfs:ContainerMembershipProperty
rdfs:ContainerMembershipProperty is a type of rdfs:Class and a subclass of rdf:Property. Its
members are the properties _1, _2, _3, and so on (the order in which the children of a
container appear in the container, under the ord component of the data model).
Ex:-
<?xml version="1.0"?>
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xml:base="http://www.animals.fake/animals#">
<rdf:Description rdf:ID="animal">
<rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
</rdf:Description>
<rdf:Description rdf:ID="horse">
<rdf:type rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
<rdfs:subClassOf rdf:resource="#animal"/>
</rdf:Description>
</rdf:RDF>
QN:-
What is AJAX? Explain how, using XQuery, data are retrieved from an XML document, with suitable examples.
AJAX:-
AJAX (Asynchronous JavaScript and XML) is a group of interrelated web
development techniques used on the client side to create interactive web applications.
With Ajax, web applications can retrieve data from the server asynchronously in the
background without interfering with the display and behavior of the existing page. The
use of Ajax techniques has led to an increase in interactive or dynamic interfaces on
web pages. Data is usually retrieved using the XMLHttpRequest object. Despite the
name, the use of XML is not actually required, and the requests do not need to
be asynchronous.
Technologies
The term Ajax has come to represent a broad group of web technologies that can be
used to implement a web application that communicates with a server in the
background, without interfering with the current state of the page. In the article that
coined the term Ajax,[1] Jesse James Garrett explained that the following technologies
are incorporated:
HTML or XHTML and CSS for presentation
the Document Object Model (DOM) for dynamic display of and interaction with
data
XML for the interchange of data, and XSLT for its manipulation
the XMLHttpRequest object for asynchronous communication
JavaScript to bring these technologies together
Since then, however, there have been a number of developments in the technologies
used in an Ajax application, and the definition of the term Ajax. In particular, it has been
noted that:
JavaScript is not the only client-side scripting language that can be used for
implementing an Ajax application. Other languages such as VBScript are also
capable of the required functionality.[2][9] However, JavaScript is the most popular
language for Ajax programming due to its inclusion in and compatibility with the
majority of modern web browsers.
XML is not required for data interchange and therefore XSLT is not required for
the manipulation of data. JavaScript Object Notation (JSON) is often used as an
alternative format for data interchange,[10] although other formats such as
preformatted HTML or plain text can also be used.[11]
Basic Ajax involves writing ad hoc JavaScript in Web pages for the client. A simpler if
cruder alternative is to use standard JavaScript libraries that can partially update a
page, such as ASP.NET's UpdatePanel. Tools (or web application frameworks) such
as Echo and ZK enable fine-grained control of a page from the server, using only
standard JavaScript libraries.
XQuery:-
XQuery is a W3C language for querying XML data. In addition to querying, the
language provides syntax allowing new XML documents to be constructed.
Where the element and attribute names are known in advance, an XML-like syntax can
be used; in other cases, expressions referred to as dynamic node constructors are
available. All these constructs are defined as expressions within the language, and can
be arbitrarily nested.
The type system of the language models all values as sequences (a singleton value is
considered to be a sequence of length one). The items in a sequence can either be
nodes or atomic values. Atomic values may be integers, strings, booleans, and so on:
the full list of types is based on the primitive types defined in XML Schema.
XQuery 1.0 does not include features for updating XML documents or databases; it also
lacks full text search capability. These features are both under active development for a
subsequent version of the language.
XQuery is a programming language that can express arbitrary XML to XML data
transformations with the following features:
1. Logical/physical data independence
2. Declarative
3. High level
4. Side-effect free
5. Strongly typed
EX:-
<books-with-prices>
{
for $b in document("http://www.bn.com/bib.xml")//book,
$a in
document("http://www.amazon.com/reviews.xml")//entry
where $b/title = $a/title
return
<book-with-prices>
{ $b/title }
<price-amazon>{ $a/price/text() }</price-amazon>
<price-bn>{ $b/price/text() }</price-bn>
</book-with-prices>
}
</books-with-prices>
EX:-
<html><head/><body>
{
for $act in doc("hamlet.xml")//ACT
let $speakers := distinct-values($act//SPEAKER)
return
<div>
<h1>{ string($act/TITLE) }</h1>
<ul>
{
for $speaker in $speakers
return <li>{ $speaker }</li>
}
</ul>
</div>
}
</body></html>
Schema ex:-
<xsd:schema targetNamespace="http://www.example.com/answer"
xmlns="http://www.example.com/answer"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
elementFormDefault="qualified">
<xsd:element name="ANSWER">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="BOOK" minOccurs="0" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="TITLE" type="xsd:string"/>
<xsd:element name="AUTHOR" type="xsd:string" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
X-query:-
element ANSWER { BOOK* }
element BOOK { TITLE, AUTHOR+ }
element AUTHOR { xsd:string }
element TITLE { xsd:string }
Applications
Below are a few examples of how XQuery can be used: extracting information from a
database for use in a web service, generating summary reports on data stored in an XML
database, searching web documents for relevant information, and selecting and
transforming XML data into XHTML for publication on the Web.
XQuery and XSLT compared:-
XSLT is currently stronger than XQuery for applications that involve making small
changes to a document (for example, deleting all the NOTE elements). Such
applications are generally handled in XSLT by use of a coding pattern that involves an
identity template that copies all nodes unchanged, modified by specific templates that
modify selected nodes. XQuery has no equivalent to this coding pattern, though in
future versions it will be possible to tackle such problems using the update facilities in
the language that are under development.[7]
Another facility lacking from XQuery is any kind of mechanism for dynamic binding or
polymorphism. The absence of this capability starts to become noticeable when writing
large applications, or when writing code that is designed to be reusable in different
environments. XSLT offers two complementary mechanisms in this area: the dynamic
matching of template rules, and the ability to override rules using xsl:import, that
make it possible to write applications with multiple customization layers.
The absence of these facilities from XQuery is a deliberate design decision: it has the
consequence that XQuery is very amenable to static analysis, which is essential to
achieve the level of optimization needed in database query languages. This also makes
it easier to detect errors in XQuery code at compile time.
The fact that XSLT 2.0 uses XML syntax makes it rather verbose in comparison to
XQuery 1.0. However, many large applications take advantage of this capability by
using XSLT to read, write, or modify stylesheets dynamically as part of a processing
pipeline. The use of XML syntax also enables the use of XML-based tools for managing
XSLT code. By contrast, XQuery syntax is more suitable for embedding in traditional
programming languages such as Java or C#. If necessary, XQuery code can also be
expressed in an XML syntax called XQueryX. The XQueryX representation of XQuery
code is rather verbose and not convenient for humans, but can easily be processed with
XML tools, for example transformed with XSLT stylesheets.[8][9]
QN:-
What is the Semantic Web? Explain OWL.
While the term "Semantic Web" is not formally defined it is mainly used to
describe the model and technologies[3] proposed by the W3C. These technologies
include the Resource Description Framework (RDF), a variety of data interchange
formats (e.g. RDF/XML, N3, Turtle, N-Triples), and notations such as RDF
Schema (RDFS) and the Web Ontology Language (OWL), all of which are
intended to provide a formal description of concepts, terms, and relationships
within a given knowledge domain.
The key element is that the application in context will try to determine the meaning
of the text or other data and then create connections for the user. The evolution of
Semantic Web will specifically make possible scenarios that were not otherwise,
such as allowing customers to share and utilize computerized applications
simultaneously in order to cross reference the time frame of activities with
documentation and/or data. According to the original vision, the availability of
machine-readable metadata would enable automated agents and other software to
access the Web more intelligently. The agents would be able to perform tasks
automatically and locate related information on behalf of the user.
Many of the technologies proposed by the W3C already exist and are used in
various projects. The Semantic Web as a global vision, however, has remained
largely unrealized and its critics have questioned the feasibility of the approach.
Purpose
The main purpose of the Semantic Web is driving the evolution of the current
Web by allowing users to use it to its full potential, thus allowing users to
find, share, and combine information more easily. Humans are capable of using the
Web to carry out tasks such as finding the Irish word for "folder," reserving a
library book, and searching for a low price for a DVD. However, machines cannot
accomplish all of these tasks without human direction, because web pages are
designed to be read by people, not machines. The semantic web is a vision of
information that can be interpreted by machines, so machines can perform more of
the tedious work involved in finding, combining, and acting upon information on
the web.
OWL:-
The Web Ontology Language (OWL) is a family of knowledge representation
languages for authoring ontologies. The languages are characterised by formal
semantics and RDF/XML-based serializations for the Semantic Web. OWL is
endorsed by the World Wide Web Consortium (W3C)[1] and has attracted
academic, medical, and commercial interest.
What is OWL?
OWL stands for Web Ontology Language
OWL is built on top of RDF
OWL is for processing information on the web
OWL was designed to be interpreted by computers
OWL was not designed for being read by people
OWL is written in XML
OWL has three sublanguages
OWL is a W3C standard
What is Ontology?
Ontology is about the exact description of things and their relationships.
For the web, ontology is about the exact description of web information and relationships
between web information.
Why OWL?
OWL is a part of the "Semantic Web Vision" - a future where web information has exact meaning and can be processed and integrated by computers.
OWL comes with a larger vocabulary and stronger syntax than RDF.
OWL Sublanguages
OWL has three sublanguages:
OWL Lite
OWL DL (includes OWL Lite)
OWL Full (includes OWL DL)
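As a small illustration of the kind of vocabulary OWL layers on top of RDF, an ontology fragment might contain declarations like the following (the class and property names are only illustrative):

<owl:Class rdf:ID="Dog">
<rdfs:subClassOf rdf:resource="#Animal"/>
</owl:Class>
<owl:ObjectProperty rdf:ID="hasOwner">
<rdfs:domain rdf:resource="#Dog"/>
<rdfs:range rdf:resource="#Person"/>
</owl:ObjectProperty>

Here owl:Class and owl:ObjectProperty are OWL terms, while rdfs:subClassOf, rdfs:domain, and rdfs:range come from RDF Schema.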
A W3C Recommendation is understood by the industry and the web community as a web
standard. A W3C Recommendation is a stable specification developed by a W3C Working
Group and reviewed by the W3C Membership.
Species (OWL sublanguages)
OWL Lite
OWL Lite was originally intended to support those users primarily needing a
classification hierarchy and simple constraints. For example, while it supports
cardinality constraints, it only permits cardinality values of 0 or 1. It was hoped
that it would be simpler to provide tool support for OWL Lite than its more
expressive relatives, allowing quick migration path for systems utilizing thesauri
and other taxonomies. In practice, however, most of the expressiveness constraints
placed on OWL Lite amount to little more than syntactic inconveniences: most of
the constructs available in OWL DL can be built using complex combinations of
OWL Lite features. Development of OWL Lite tools has thus proven almost as
difficult as development of tools for OWL DL, and OWL Lite is not widely used.
OWL DL
OWL DL was designed to provide the maximum expressiveness possible while retaining
computational completeness and decidability; it includes all OWL language constructs,
but they can be used only under certain restrictions.
OWL Full
OWL Full is based on a different semantics from OWL Lite or OWL DL, and was
designed to preserve some compatibility with RDF Schema. For example, in OWL
Full a class can be treated simultaneously as a collection of individuals and as an
individual in its own right; this is not permitted in OWL DL. OWL Full allows an
ontology to augment the meaning of the pre-defined (RDF or OWL) vocabulary. It
is unlikely that any reasoning software will be able to support complete reasoning
for OWL Full.
Syntax
The OWL family of languages support a variety of syntaxes. It is useful to
distinguish high level syntaxes aimed at specification from exchange syntaxes
more suitable for general use.
These are close to the ontology structure of languages in the OWL family.
OWL abstract syntax
This high level syntax is used to specify the OWL ontology structure and
semantics.[22]
Sean Bechhofer et al. argue that though this syntax is hard to parse, it is quite
concrete. They conclude that the name abstract syntax may be somewhat
misleading.[23]
Syntactic mappings into RDF are specified[22][25] for languages in the OWL family.
Several RDF serialization formats have been devised. Each leads to a syntax for
languages in the OWL family through this mapping. RDF/XML is normative.[22][25]
The Manchester Syntax is a compact, human readable syntax with a style close to
frame languages. Variations are available for OWL and OWL2. Not all OWL and
OWL2 ontologies can be expressed in this syntax
QN:-
Explain CORBA (ORB services) and TP monitors.
An ORB uses the CORBA Interface Repository to find out how to locate and
communicate with a requested component. When creating a component, a
programmer uses either CORBA's Interface Definition Language (IDL) to declare
its public interfaces or the compiler of the programming language translates the
language statements into appropriate IDL statements. These statements are stored
in the Interface Repository as metadata or definitions of how a component's
interface works.
Life cycle services, which define how to create, copy, move, and delete a component
Persistence service, which provides the ability to store data in object databases, plain
files, and other data stores
Naming service, which allows a component to find another component by name; it also
supports existing naming systems or directories, including DCE and Sun's NIS (Network
Information System)
Event service, which lets components specify events that they want to be notified of
Concurrency control service, which allows an ORB to manage locks to data that
transactions or threads may compete for
Transaction service, which ensures that when a transaction is completed, changes are
committed, or that, if not, database changes are restored to their pre-transaction state
Relationship service, which creates dynamic associations between components that
haven't "met" before and for keeping track of these associations
Externalization service, which provides a way to get data to and from a component in a
"stream"
Query service, which allows a component to query a database. This service is based on
the SQL3 specification and the Object Database Management Group's (ODMG) Object
Query Language (OQL).
Licensing service, which allows the use of a component to be measured for purposes of
compensation for use. Charging can be done by session, by node, by instance creation,
and by site.
Properties service, which lets a component contain a self-description that other
components can use.
In addition, an ORB also can provide security and time services. Additional
services for trading, collections, and change management are also planned. The
requests and replies that originate in ORBs are expressed through the Internet
Inter-ORB Protocol (IIOP) or other transport layer protocols.
TP-MONITOR:-
FUNCTION:-
Transaction processing is supported by programs called transaction processing
monitors (TP monitors). TP monitors perform three types of functions: system runtime
functions, system administration functions, and application development functions.
Features
Rapid response
Fast performance with a rapid response time is critical. Businesses cannot afford to
have customers waiting for a TPS to respond; the turnaround time from the input of
the transaction to the production of the output must be a few seconds or less.
Reliability
Many organizations rely heavily on their TPS; a breakdown will disrupt operations
or even stop the business. For a TPS to be effective its failure rate must be very
low. If a TPS does fail, then quick and accurate recovery must be possible. This
makes well–designed backup and recovery procedures essential.
Inflexibility
A TPS wants every transaction to be processed in the same way regardless of the
user, the customer, or the time of day. If a TPS were flexible, there would be too
many opportunities for non-standard operations. For example, a commercial airline
needs to consistently accept airline reservations from a range of travel agents;
accepting different transaction data from different travel agents would be a
problem.
Atomicity
A transaction's changes to the state are atomic: either all happen or none happen.
These changes include database changes, messages, and actions on transducers.[2]
Consistency
A transaction is a correct transformation of the state; the actions taken as a group do
not violate any of the integrity constraints associated with the state.
Durability
Once a transaction completes successfully (commits), its changes to the state survive
failures.
Concurrency
Ensures that two users cannot change the same data at the same time. That is, one
user cannot change a piece of data before another user has finished with it. For
example, if an airline ticket agent starts to reserve the last seat on a flight, then
another agent cannot tell another passenger that a seat is available.
QN:-
Compare and contrast tightly coupled and loosely coupled Web services.
Tight coupling versus loose coupling
Most large, complex systems are built as small collections of large subsystems
instead of as large collections of small, independent subsystems. This is because of
the potential for increased performance, security, economy, or some other key
property that you can't get by decoupling the system into relatively independent,
small elements. The tight coupling characteristics of large-scale systems generally
result from optimizing the overall design and from minimizing redundancies and
inefficiencies among the system's components. This results in closer coupling
among the system's components and large numbers of critical interdependencies.
You can change details in loosely coupled Web services as long as those changes
don't affect the functionality of the called Web services. Tightly coupled systems
can be difficult to maintain, because changes in one system subcomponent usually
require the other subcomponent to adapt immediately.
Web services are normally message based and loosely coupled whether
the resource is scarce or not; they wait for an answer via message
queuing before they take further action, if any, based on the contents of
these messages. They have the advantage of messages being passed
instead of method invocations and provide a degree of independence
between the sending and receiving Web services.
QN:-
Explain Web services security, service contract, service lease, and RPC.
Features
WS-Security describes three main mechanisms:
How to sign SOAP messages to assure integrity. Signed messages also provide
non-repudiation.
How to encrypt SOAP messages to assure confidentiality.
How to attach security tokens.
The token formats and semantics are defined in the associated profile
documents.
WS-Security incorporates security features in the header of a SOAP
message, working in the application layer.
SERVICE CONTRACT:-
A service contract is an agreement between a service provider and a service consumer
that specifies how the service is to be used: the operations offered, the messages and
data types exchanged, and the binding details. For Web services, the contract is typically
described in WSDL.
SERVICE LEASE:-
A service lease specifies the period of time for which the service contract is valid. When
the lease expires, the consumer must request a new one; this keeps consumers from
relying on a service definition indefinitely and lets the provider evolve the service
without consulting every consumer.
RPC:-
In computer science, a remote procedure call (RPC) is an inter-process
communication that allows a computer program to cause a subroutine or
procedure to execute in another address space (commonly on another computer
on a shared network) without the programmer explicitly coding the details for this
remote interaction. That is, the programmer writes essentially the same code
whether the subroutine is local to the executing program, or remote. When the
software in question uses object-oriented principles, RPC is called remote
invocation or remote method invocation
Message passing
An RPC is initiated by the client, which sends a request message to a known
remote server to execute a specified procedure with supplied parameters. The
remote server sends a response to the client, and the application continues its
process. There are many variations and subtleties in various implementations,
resulting in a variety of different (incompatible) RPC protocols. While the server is
processing the call, the client is blocked (it waits until the server has finished
processing before resuming execution).
An important difference between remote procedure calls and local calls is that
remote calls can fail because of unpredictable network problems. Also, callers
generally must deal with such failures without knowing whether the remote
procedure was actually invoked. Idempotent procedures (those that have no
additional effects if called more than once) are easily handled, but enough
difficulties remain that code to call remote procedures is often confined to
carefully written low-level subsystems.
1. The client calls the Client stub. The call is a local procedure call, with parameters pushed
on to the stack in the normal way.
2. The client stub packs the parameters into a message and makes a system call to send
the message. Packing the parameters is called marshalling.
3. The kernel sends the message from the client machine to the server machine.
4. The kernel passes the incoming packets to the server stub.
5. Finally, the server stub calls the server procedure. The reply traces the same steps in the
other direction.
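In Java, the remote method invocation flavor of this idea looks roughly like the following sketch using the java.rmi API; the interface and names are only illustrative, and the stubs and marshalling described above are generated and handled by the RMI runtime:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote contract: every method can fail with a RemoteException
interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

public class GreeterServer implements Greeter {
    public String greet(String name) throws RemoteException {
        return "Hello, " + name;
    }
    public static void main(String[] args) throws Exception {
        // Export the object (creating the server-side stub) and register it by name
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new GreeterServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("Greeter", stub);
        // A client would then call:
        // ((Greeter) registry.lookup("Greeter")).greet("world");
    }
}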
What Is RPC?
Consider an example: we use UNIX to run a remote shell and execute a command that
way. There are some problems with this method: the command may be slow to execute,
and you require a login account on the remote machine.
For the protocol you must identify the name of the service procedures,
and data types of parameters and return arguments.
rpcgen uses its own language (RPC language or RPCL) which looks
very similar to preprocessor directives.