
Thursday, April 30, 2009

XML vs Binary Serialization - Quick Reference

ISerializable interface - Exposes a single method, GetObjectData, which is responsible for storing an object's fields during serialization. To implement it, you populate the SerializationInfo object passed to the method.
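As a minimal sketch (the Person type and its members are illustrative, not from the post), an ISerializable implementation might look like this; the protected constructor is the customary deserialization counterpart of GetObjectData:

using System;
using System.Runtime.Serialization;

[Serializable]
public class Person : ISerializable
{
    public string Name { get; set; }
    public int Age { get; set; }

    public Person() { }

    // Serialization: copy each field into the SerializationInfo bag.
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("Name", Name);
        info.AddValue("Age", Age);
    }

    // Deserialization: the formatter calls this special constructor to rebuild the object.
    protected Person(SerializationInfo info, StreamingContext context)
    {
        Name = info.GetString("Name");
        Age = info.GetInt32("Age");
    }
}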

 

XML Serialization –

 

  • Serializes only the public fields and properties of an object, or the parameters and return values of methods, into an XML stream.
  • Smaller in size than the binary format, since only public members are written.

 

Binary Serialization –

 

  • Serializes the public and private fields of the object, along with the name of the class and the assembly containing it, into a stream of bytes, which is then written to a data stream.
  • Larger in size than XML, since private fields and type/assembly metadata are included (see the sketch below).
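A small comparison sketch, assuming an illustrative Supplier type and file names; XmlSerializer writes only the public members as text, while BinaryFormatter also captures private fields and type/assembly metadata:

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Xml.Serialization;

[Serializable]
public class Supplier
{
    public int Id { get; set; }
    public string Name { get; set; }
    private string notes = "internal";   // serialized by BinaryFormatter, skipped by XmlSerializer
}

class SerializationComparison
{
    static void Main()
    {
        var supplier = new Supplier { Id = 1, Name = "Acme" };

        // XML serialization: public members only, written as readable text.
        var xmlSerializer = new XmlSerializer(typeof(Supplier));
        using (var xmlStream = File.Create("supplier.xml"))
            xmlSerializer.Serialize(xmlStream, supplier);

        // Binary serialization: public and private fields plus class/assembly information.
        var formatter = new BinaryFormatter();
        using (var binaryStream = File.Create("supplier.bin"))
            formatter.Serialize(binaryStream, supplier);
    }
}

Opening supplier.xml in a text editor makes the difference in what each format records easy to see.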

 

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead

 

 

Well Formed vs Valid XML Document - Quick Reference

An XML document has two correctness levels:

 

  • Well-formed. A well-formed document conforms to the XML syntax rules; e.g., if a start-tag (&lt;tag&gt;) appears without a corresponding end-tag (&lt;/tag&gt;), it is not well-formed. A document that is not well-formed is not XML, and a conforming parser is not allowed to process it.

 

  • Valid. A valid document additionally conforms to semantic rules, defined either by the user or in an XML schema (or a DTD); e.g., if a document contains an undefined element, it is not valid, and a validating parser is not allowed to process it.
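In .NET, this second level can be checked by attaching a schema to XmlReaderSettings; a brief sketch, assuming hypothetical books.xml and books.xsd files:

using System;
using System.Xml;

class ValidationDemo
{
    static void Main()
    {
        var settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add(null, "books.xsd");                // hypothetical schema file
        settings.ValidationEventHandler += (sender, e) =>
            Console.WriteLine("Validation error: " + e.Message);

        // A document that is not well-formed throws an XmlException immediately;
        // a well-formed but invalid document is reported through the handler above.
        using (XmlReader reader = XmlReader.Create("books.xml", settings))
        {
            while (reader.Read()) { }
        }
    }
}

A document can therefore be well-formed yet still invalid; the handler above is where that second kind of failure shows up.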

 

The first fragment below is not well-formed; the second shows the corrected, well-formed version.

 

<!-- WRONG! NOT WELL-FORMED XML! -->
<title>Book on Logic<author>Aristotle</title></author>
 
<!-- Correct: well-formed XML fragment. -->
<title>Book on Logic</title> <author>Aristotle</author>

 

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead

 

 

Xml parsers - Recap

Xml Parsers

 

Three traditional techniques for processing XML files are:

 

  • Using a programming language and the SAX API – Push Parser
  • Using a programming language and the DOM API – Object Model Parser
  • Using a transformation engine and a filter

 

More recent and emerging techniques for processing XML files are:

 

  • Pull Parsing – System.Xml.XmlReader in .NET, StAX (Streaming API for XML) in Java
  • Non-Extractive Parsing (i.e. in-situ parsing)
  • Data binding

 

SAX Parser –

 

  • SAX is a lexical, event-driven interface in which a document is read serially and its contents are reported as callbacks to various methods on a handler object of the user's design.
  • SAX is fast and efficient to implement, but difficult to use for extracting information at random from the XML, since it tends to burden the application author with keeping track of what part of the document is being processed.
  • It is better suited to situations in which certain types of information are always handled the same way, no matter where they occur in the document.

 

Pull Parser -

 

  • A pull parser creates an iterator that sequentially visits the various elements, attributes, and data in an XML document.
  • Code which uses this 'iterator' can test the current item (to tell, for example, whether it is a start or end element, or text), and inspect its attributes (local name, namespace, values of XML attributes, value of text, etc.), and can also move the iterator to the 'next' item.
  • The code can thus extract information from the document as it traverses it.
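A minimal pull-parsing sketch with System.Xml.XmlReader (the file and element names are illustrative); each Read() call advances the iterator to the next node on demand:

using System;
using System.Xml;

class PullParsingDemo
{
    static void Main()
    {
        using (XmlReader reader = XmlReader.Create("books.xml"))
        {
            while (reader.Read())
            {
                // Inspect the current item: is it a start element named "title"?
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "title")
                {
                    // Pull just the text content of this element and move on.
                    Console.WriteLine(reader.ReadElementContentAsString());
                }
            }
        }
    }
}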

 

Recap of XML Parser Types

 

  • Generally, there are two types of XML parsers.
  • First are the push and pull parsers, which simply read an XML document and return its data and structure (e.g., SAX and StAX).
  • Both are event-driven parsers because they return events that the developer has to handle.

 

  • Push-parser implementations like SAX (Simple API for XML) return the data of the whole document in one stream and cannot be stopped (short of throwing an exception, in Java).
  • Pull parsers, on the other hand, only return data when they are asked to read the next node in a document. StAX in Java and System.Xml.XmlReader in .NET are pull parsers.

 

  • The second type of XML parser is the Object Model Parser (e.g., DOM and Apache AXIOM), which not only reads the data but also constructs an in-memory representation of the document that can be altered. Since DOM parsers mostly use SAX parsers to read in the documents, the object model of a document is always built completely. This is a performance limitation if only data at the beginning of a document needs to be read and altered. Newer approaches like Apache's AXIOM use StAX pull-parser implementations to overcome this limitation: AXIOM only builds the tree representation of a document up to the last node that was requested, so it does not need to read the complete document.
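For contrast, an object-model (DOM) sketch using System.Xml.XmlDocument, which loads the whole document into memory and lets you alter it in place (file and element names are illustrative):

using System;
using System.Xml;

class DomDemo
{
    static void Main()
    {
        // Loading builds the full in-memory tree, regardless of how much of it we use.
        var doc = new XmlDocument();
        doc.Load("books.xml");

        // The tree can be queried and modified in place.
        XmlNode firstTitle = doc.SelectSingleNode("//title");
        if (firstTitle != null)
        {
            firstTitle.InnerText = "Book on Rhetoric";
        }

        doc.Save("books-updated.xml");
    }
}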

 

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead

 

 

Wednesday, April 29, 2009

Future in the Cloud - Windows Azure

Hi,

 

There has already been a lot of buzz around cloud computing (CC). Microsoft recently announced its offering in this space, Azure, at PDC.

I have been reading about CC for some time and thought I'd share my notes today.

 

Why does cloud computing look like a plausible option? What are the key drivers?

 

The key drivers are:

 

Infrastructure set up and maintenance cost:

 

Cloud computing will reduce the cost associated with setting up and maintaining IT infrastructure. This will be a real boon for SMBs and start-ups, as they will be able to put their money in the right place, i.e., on innovation and technology, rather than on IT infrastructure set-up.

 

Scalability

Of businesses: Cloud computing would certainly help businesses scale up and down based on market demand, with minimal cost impact and less capital expenditure.

 

Of apps: When you deploy your products in the cloud, scaling of the apps is handled through cloud services.

 

Reliability

The cloud will offer you reliability in terms of always-on, always-available infrastructure. Of course, this is based on the assumption that cloud service providers have mechanisms in place to ensure that reliability.

 

Rely upon the best practices

Most of the time, whatever we are doing has already been done by someone else somewhere, and often in a better manner. Cloud computing helps you capitalize on those learnings.

 

Different cost models

You can have a utility-based payment model where you pay per usage, through some kind of transaction-based or subscription-based model.

 

 

 

What will be the probable issues faced in the propagation of cloud computing in the computing world?

 

1) Reliability & SLAs: How reliable would they be? It depends on the SLAs defined by the cloud computing providers.

2) Security: What about data security? This will be critical for apps in the healthcare and financial domains. The key question is, will a user be comfortable with his bank account data sitting in a network that is publicly accessible?

3) Legal obligations: There would be certain contractual and legal obligations around having apps and data in the cloud. Will you feel comfortable if your competitor's app runs from the same cloud?

 

All this theory is fine. But are there any *real* examples that use cloud computing?

 

Yes, there are:

 

1) Google : to quote from here

Google has an initial edge in cloud computing precisely because of its need to produce instant, accurate results for millions of incoming search inquiries every day, parsing through the terabytes of Internet data cached on its servers. Google's approach has been to design and manufacture hundreds of thousands of its own servers from commodity components, connecting relatively inexpensive processors in parallel to create an immensely powerful, scalable system. Google Apps, Maps, and Gmail are all based in the cloud.

 

2) Amazon: Amazon reportedly runs on its own cloud, though I still haven't found a site/source actually stating this clearly.

 

3) Some enterprise apps run on cloud service infrastructure such as Google App Engine and Amazon EC2, though they are not listed anywhere by any of the cloud computing players. :)

 

Which are the key players in the cloud computing world?

 

1) MS Azure

2) Amazon EC2

3) Google App Engine

4) Appistry : http://www.appistry.com/

 

So, how does the future look? Cloudy? We'll find out soon.

 

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead

 

Tuesday, April 28, 2009

LTS vs. EF

Scope -

The best way to examine the two products is to look at some of the differences in the functionality that they support.

LINQ to SQL -

- The goal in the design of LTS is clearly Rapid Development, where your database for the most part is Mirrored In Your Domain Model (that is, Table per Class), and foreign keys are exposed as strongly typed relationships.

- According to MSDN, LTS is “designed to provide Strongly Typed LINQ Access for rapidly developed applications across the Microsoft SQL Server family of databases.”

- LTS is best suited for Rapid Application Development, specifically against a Microsoft SQL Server database. This message is espoused by Microsoft itself.

- LTS generates a very simple ORM, as compared to other ORM solutions that provide you with a plethora of properties and options.

- Along with the simple UI, LTS also offers Table per Hierarchy inheritance, the most straightforward form of inheritance in ORM.

- LTS is geared specifically at direct 1:1 mapping of .NET classes to SQL Server database tables. This is an important point about LTS that supports the assertion that LTS is ideal for simple applications.

- LTS really supports only a direct mapping of your database schema to your classes and is limited in some of its functionality.

All of this excludes LTS from many enterprise environments because of its limited out-of-the-box support for complex scenarios. A minimal usage sketch follows below.
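A minimal LINQ to SQL sketch, assuming a designer-generated NorthwindDataContext whose Customers table is mapped 1:1 to a Customer class (the context and entity names are assumptions for illustration, not from the post):

using System;
using System.Linq;

class LinqToSqlDemo
{
    static void Main()
    {
        // NorthwindDataContext and Customer are assumed to be generated by the LTS designer.
        using (var db = new NorthwindDataContext())
        {
            var londonCustomers = from c in db.Customers
                                  where c.City == "London"
                                  orderby c.CompanyName
                                  select c;

            foreach (var customer in londonCustomers)
            {
                Console.WriteLine(customer.CompanyName);
            }
        }
    }
}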

Entity Framework -

Alternatively, MSDN classifies EF as “designed to provide Strongly Typed LINQ Access for applications requiring a More Flexible Object-Relational Mapping, across Microsoft SQL Server and third-party databases.”

- Additionally, EF is classified as the Enterprise Solution because it separates its metadata into Well-Defined Architectural Layers (Conceptual, Mapping, Schema). EF uses a Complex Mapping Algorithm to separate the metadata into layers (that is, Conceptual, Storage, Mapping), which enables you to model your domain based on entities vs. database tables. For example, in EF you can map a single entity to one or more tables or views, or you can map multiple entities to the same table or view.

- These layers are well suited for a large enterprise because they provide you with Loose Coupling And Fine-Grained Control over the conceptual and physical models.

- The important aspect that demonstrates EF’s ability to be an enterprise solution is its support for inheritance. Unlike LTS, which supports simple inheritance only in the form of Table per Hierarchy, EF supports all the core ORM inheritance patterns. These patterns include Table per Hierarchy, Table per Class, and Table per Concrete Class.

- Another reason why EF has rock star status is because of its support for Complex Composite Types and Many-To-Many Relationships.

- Additionally, where EF has LINQ to Entities, which is similar in design to LTS, EF also has Entity SQL and a strong provider model.

- Additionally, EF is approaching Provider Independence and has overall momentum in the community, things that LTS is missing.

- In addition, whereas LTS is designed only as LINQ to SQL, EF presents many other constructs (for example, ESQL and provider independence) as well as LINQ to Entities.
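A comparable EF sketch, assuming a designer-generated NorthwindEntities ObjectContext (v1-era API; the context and entity names are assumptions); the same query is shown in LINQ to Entities and then in Entity SQL:

using System;
using System.Linq;
using System.Data.Objects;   // ObjectContext / ObjectParameter (EF v1)

class EntityFrameworkDemo
{
    static void Main()
    {
        // NorthwindEntities and Product are assumed to be generated from the EDM designer.
        using (var context = new NorthwindEntities())
        {
            // LINQ to Entities
            var expensive = from p in context.Products
                            where p.UnitPrice > 50m
                            select p;

            // The same query expressed in Entity SQL
            var expensiveEsql = context.CreateQuery<Product>(
                "SELECT VALUE p FROM NorthwindEntities.Products AS p WHERE p.UnitPrice > @price",
                new ObjectParameter("price", 50m));

            Console.WriteLine(expensive.Count());
            Console.WriteLine(expensiveEsql.Count());
        }
    }
}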


Thanks & Regards,

Arun Manglick || Senior Tech Lead




LINQ to DataSet

Hi,

 

-          LINQ to DataSet is not intended to replace ADO.NET 2.0 code.

-          Rather, the LINQ to DataSet code sits on top of and works in conjunction with the ADO.NET 2.0 code, and is useful in scenarios where data comes from multiple data providers, is queried locally, and is used for reporting and analysis.

 

Applicability – 

 

-          LINQ for data is supported in two basic ways. You can write LINQ to DataSet code that provides for querying tables in DataSets, and you can define entities and use LINQ to SQL to query entities—classes that map to tables.

-          Consider using LINQ to DataSets only if you are incorporating LINQ into existing code that uses ADO.NET DataSets and tables.

-          If you are writing new code, consider using LINQ to SQL with object-relational entities mapped to SQL tables.

 

-          Since a DataSet can be obtained using normal ADO.NET SQL queries, LINQ to DataSet allows LINQ queries over any database that can be queried with ADO.NET. This provides a far more dynamic database-querying interface than LINQ to SQL.

 

 

 

-          Support for DataSets with LINQ is exposed primarily through the DataRowExtensions and the DataTableExtensions classes.

-          DataRowExtensions introduces the extension methods Field and SetField.

-          DataTableExtensions introduces the extension methods AsDataView, AsEnumerable, and CopyToDataTable.

 

-          Table extension methods yield a queryable sequence.

-          Row extensions provide access to field-level data.

 

Table Extension –

 

 

DataSet data = new DataSet();
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand("SELECT * FROM Suppliers", connection);
    command.CommandType = CommandType.Text;
    SqlDataAdapter adapter = new SqlDataAdapter(command);
    adapter.Fill(data, "Suppliers");
}

DataTable supplierTable = data.Tables["Suppliers"];

IEnumerable<DataRow> suppliers = from supplier in supplierTable.AsEnumerable()
                                 select supplier;

foreach (DataRow row in suppliers)
{
    Console.WriteLine(row.Field<string>("CompanyName"));
}

 

 

 

Row Extension –

 

 

IEnumerable<DataRow> details = from detail in detailTable.AsEnumerable()
                               where detail.Field<float>("Discount") > 0.10f
                               select detail;

foreach (DataRow row in details)
{
    Console.WriteLine(row.Field<int>("OrderID"));
}

 

 

 

 

Defining a Join with DataSets : Equi-Join

 

const string sql = "SELECT * FROM Orders;" + "SELECT * FROM [Order Details];";
DataSet data = new DataSet();
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand(sql, connection);
    command.CommandType = CommandType.Text;
    SqlDataAdapter adapter = new SqlDataAdapter(command);
    adapter.Fill(data);
}

DataTable orders = data.Tables[0];
DataTable orderDetails = data.Tables[1];

var orderResults = from order in orders.AsEnumerable()
                   join detail in orderDetails.AsEnumerable()
                   on order.Field<int>("OrderID")
                   equals detail.Field<int>("OrderID")
                   select new
                   {
                       OrderID = order.Field<int>("OrderID"),
                       CustomerID = order.Field<string>("CustomerID"),
                       ProductID = detail.Field<int>("ProductID"),
                       UnitPrice = detail.Field<decimal>("UnitPrice"),
                       Quantity = detail.Field<Int16>("Quantity"),
                       Discount = detail.Field<float>("Discount")
                   };

Console.WriteLine("Orders & Details");
foreach (var result in orderResults)
{
    Console.WriteLine("Order ID: {0}", result.OrderID);
    Console.WriteLine("Customer ID: {0}", result.CustomerID);
}

 

 

 

Another approach – a self-join using two from clauses, finding products offered by more than one supplier:

 

 

var productsFromMultipleSuppliers =
    from product1 in products.AsEnumerable()
    from product2 in products.AsEnumerable()
    where product1.Field<string>("ProductName") == product2.Field<string>("ProductName")
       && product1.Field<int>("SupplierID") != product2.Field<int>("SupplierID")
    select new
    {
        ProductName = product1.Field<string>("ProductName"),
        Supplier1 = product1.Field<int>("SupplierID"),
        Supplier1Price = product1.Field<decimal>("UnitPrice"),
        Supplier2 = product2.Field<int>("SupplierID"),
        Supplier2Price = product2.Field<decimal>("UnitPrice")
    };

 

 

 

 

Sorting Against DataTables

 

 

IEnumerable<DataRow> details = from detail in detailTable.AsEnumerable()
                               where detail.Field<float>("Discount") > 0.10f
                               orderby detail.Field<int>("ProductID")
                               select detail;

 

 

 

Defining a Left Join and a Word about Right Joins

 

-          The left join and right join are not supported by keywords in LINQ.

-          However, the left join (or left outer join) can be constructed by flattening the group join with the DefaultIfEmpty extension method.

 

-          A left outer join is constructed from two pieces: a group join and the DefaultIfEmpty extension method.

-          In short, the left join needs an empty child DataRow (created with DataTable.NewRow) in the event there is no matching child row.

-          Because of this, the projection in select new must use nullable types for the child data.

-          This is necessary because the empty rows for missing children are initialized with nulls.

 

 

DataSet dataset = new DataSet();
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    const string SQL = "SELECT * FROM CUSTOMERS;SELECT * FROM ORDERS";
    SqlDataAdapter adapter = new SqlDataAdapter(SQL, connection);
    adapter.Fill(dataset);
}

DataTable customers = dataset.Tables[0];
DataTable orders = dataset.Tables[1];

var customersWithoutOrders = from customer in customers.AsEnumerable()
                             join order in orders.AsEnumerable()
                             on customer.Field<string>("CustomerID") equals
                                order.Field<string>("CustomerID") into children
                             from child in children.DefaultIfEmpty(orders.NewRow())
                             let OrderID = child.Field<int?>("OrderID")
                             where OrderID == null
                             select new
                             {
                                 Company = customer.Field<string>("CompanyName"),
                                 City = customer.Field<string>("City"),
                                 Phone = customer.Field<string>("Phone"),
                                 OrderID = child.Field<int?>("OrderID"),
                                 OrderDate = child.Field<DateTime?>("OrderDate"),
                                 RequiredDate = child.Field<DateTime?>("RequiredDate"),
                                 ShipCity = child.Field<string>("ShipCity")
                             };

foreach (var item in customersWithoutOrders)
{
    Console.WriteLine("Company: {0}", item.Company);
}

 
DataRow Set Operators

 

-          There are a handful of Standard Query Operators that exist for the purpose of making sequence set-type comparisons.

-          They are the Distinct, Except, Intersect, Union, and SequenceEqual operators.

 

-          For each of these set-type operators, determining sequence element equality is necessary to perform the appropriate set operation. These operators perform element comparisons by calling the GetHashCode and Equals methods on the elements. For a DataRow, this results in a reference comparison, which is not the desired behavior and would cause the operators to return erroneous results. Because of this, each of these operators has an additional prototype that is not required in LINQ to Objects.

 

-          This additional prototype allows an IEqualityComparer object to be provided as an argument. Conveniently, a comparer object has been provided for us specifically for these versions of the operators - System.Data.DataRowComparer.Default
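As a quick illustration of why the comparer matters, here is a sketch that reuses the Student class and GetDataTable helper from the samples below; the duplicate-row setup is an assumption for demonstration only:

Student[] duplicates = {
    new Student { Id = 1, Name = "Joe Rattz" },
    new Student { Id = 1, Name = "Joe Rattz" }     // same values, different DataRow objects
};

DataTable dt = GetDataTable(duplicates);

// Default equality is a reference comparison, so both rows survive.
int withoutComparer = dt.AsEnumerable().Distinct().Count();

// DataRowComparer.Default compares column values, so the duplicate is removed.
int withComparer = dt.AsEnumerable().Distinct(System.Data.DataRowComparer.Default).Count();

Console.WriteLine("{0} vs {1}", withoutComparer, withComparer);   // prints "2 vs 1"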

 

Distinct operator -

 

public static IEnumerable<T> Distinct<T> (this IEnumerable<T> source, IEqualityComparer<T> comparer);

 

-          Samples

 

DataTable dt = GetDataTable(students);

IEnumerable<DataRow> distinct = dt.AsEnumerable().Distinct(DataRowComparer.Default);

 

 

 

     Output –

 

Before:

Id  Name
=============================
1   Joe Rattz
6   Ulyses Hutchens
19  Bob Tanko
45  Erin Doutensal
1   Joe Rattz
12  Bob Mapplethorpe
17  Anthony Adams
32  Dignan Stephens

After (the duplicate "1 Joe Rattz" row is removed):

Id  Name
=============================
1   Joe Rattz
6   Ulyses Hutchens
19  Bob Tanko
45  Erin Doutensal
12  Bob Mapplethorpe
17  Anthony Adams
32  Dignan Stephens

 

 

 

Except operator -

           

public static IEnumerable<T> Except<T> (this IEnumerable<T> first, IEnumerable<T> second, IEqualityComparer<T> comparer);

 

Student[] students = {

new Student { Id = 1, Name = "Joe Rattz" },

new Student { Id = 7, Name = "Anthony Adams" },

new Student { Id = 13, Name = "Stacy Sinclair" },

new Student { Id = 72, Name = "Dignan Stephens" }

};

 

Student[] students2 = {

new Student { Id = 5, Name = "Abe Henry" },

new Student { Id = 7, Name = "Anthony Adams" },

new Student { Id = 29, Name = "Future Man" },

new Student { Id = 72, Name = "Dignan Stephens" }

};

 

 

DataTable dt1 = GetDataTable(students);

IEnumerable<DataRow> seq1 = dt1.AsEnumerable();

 

DataTable dt2 = GetDataTable(students2);

IEnumerable<DataRow> seq2 = dt2.AsEnumerable();

 

IEnumerable<DataRow> except = seq1.Except(seq2, System.Data.DataRowComparer.Default);

 

 

 

Id Name

=============================

1   Joe Rattz

13 Stacy Sinclair

 

 

 

 

Intersect operator -

           

public static IEnumerable<T> Intersect<T> (this IEnumerable<T> first, IEnumerable<T> second, IEqualityComparer<T> comparer);

 

DataTable dt1 = GetDataTable(students);

IEnumerable<DataRow> seq1 = dt1.AsEnumerable();

 

DataTable dt2 = GetDataTable(students2);

IEnumerable<DataRow> seq2 = dt2.AsEnumerable();

 

IEnumerable<DataRow> intersect= seq1.Intersect(seq2, System.Data.DataRowComparer.Default);

 

 

 

Id Name

=============================

7 Anthony Adams

72 Dignan Stephens

 

 

 

Union operator -

           

public static IEnumerable<T> Union<T> (this IEnumerable<T> first, IEnumerable<T> second, IEqualityComparer<T> comparer);

 

DataTable dt1 = GetDataTable(students);

IEnumerable<DataRow> seq1 = dt1.AsEnumerable();

 

DataTable dt2 = GetDataTable(students2);

IEnumerable<DataRow> seq2 = dt2.AsEnumerable();

 

IEnumerable<DataRow> union= seq1.Union(seq2, System.Data.DataRowComparer.Default);

 

 

 

Id Name

=============================

1   Joe Rattz

7   Anthony Adams

13 Stacy Sinclair

72 Dignan Stephens

5   Abe Henry

29 Future Man

 

Note – No repeated records.

 

 

 

SequenceEqual operator -

           

-          The SequenceEqual operator compares two sequences of DataRow objects to determine whether they are equal.

-          It enumerates two source sequences, comparing the corresponding DataRow objects. If the two source sequences have the Same Number Of Records, and if all the Corresponding Datarow Objects Are Equal, true is returned. Otherwise, the two sequences are not equal, and false is returned.

 

public static bool SequenceEqual<T> (this IEnumerable<T> first, IEnumerable<T> second, IEqualityComparer<T> comparer);

 

Student[] students = {

new Student { Id = 1, Name = "Joe Rattz" },

new Student { Id = 7, Name = "Anthony Adams" },

new Student { Id = 13, Name = "Stacy Sinclair" },

new Student { Id = 72, Name = "Dignan Stephens" }

};

 

 

DataTable dt1 = GetDataTable(students);

IEnumerable<DataRow> seq1 = dt1.AsEnumerable();

 

DataTable dt2 = GetDataTable(students);

IEnumerable<DataRow> seq2 = dt2.AsEnumerable();

 

bool equal = seq1.SequenceEqual(seq2, System.Data.DataRowComparer.Default);

 

 

 

Output - True

 

 

 

 

DataRow Field Operators

 

Field<T>

 

-          With DataRow objects we have a problem: their column values do not get compared properly for equality when they are accessed with the DataRow object's indexer, if the column is a value type.

-          The reason is that, because the column's data type could be any type, the indexer returns an object of type System.Object. This means that if a column is of type int, a value type, it must get boxed into an object of type Object, and this boxing is where the problem lies.

 

 

Student[] students = {

new Student { Id = 1, Name = "Joe Rattz" },

new Student { Id = 7, Name = "Anthony Adams" },

new Student { Id = 13, Name = "Stacy Sinclair" },

new Student { Id = 72, Name = "Dignan Stephens" }

};

 

StudentClass[] classDesignations = {

new StudentClass { Id = 1, Class = "Sophmore" },

new StudentClass { Id = 7, Class = "Freshman" },

new StudentClass { Id = 13, Class = "Graduate" },

new StudentClass { Id = 72, Class = "Senior" }

};

 

 

DataTable dt1 = GetDataTable(students);

IEnumerable<DataRow> seq1 = dt1.AsEnumerable();

 

DataTable dt2 = GetDataTable2(classDesignations);

IEnumerable<DataRow> seq2 = dt2.AsEnumerable();

 

string anthonysClass =  (from s in seq1

where s.Field<string>("Name") == "Anthony Adams"

from c in seq2

where c["Id"] == s["Id"]

select (string)c["Class"]).SingleOrDefault<string>();

 

Console.WriteLine("Anthony's Class is: {0}", anthonysClass != null ? anthonysClass : "null");

 

 

Output - Anthony's Class is: null

 

Reason – The Id column is a value type (int), so the values returned by the DataRow indexer are boxed to System.Object, and the == operator then performs a reference comparison on the boxed values, which never finds them equal.

 

 

 

-          Let's solve the boxing problem by casting the values back to int:

 

where (int)c["Id"] == (int) s["Id"]

 

 

Output - Anthony's Class is: Freshman

 

Problem Solved – But ??

 

-          So that solves the boxing problem. However, there is still one other problem.

-          What if the value is null, i.e., DBNull.Value? If a column's value is DBNull.Value, casting it to another data type will throw an exception.

 

 

 

-          Let's solve both the boxing and null problems.

-          This is where the Field<T> and SetField<T> operators come in – they solve both problems: boxed value comparisons and null handling.

 

where c.Field<int>("Id") == s.Field<int>("Id")

 

 

Output - Anthony's Class is: Freshman

 

 

 

-          Prototypes –

 

 

1

public static T Field<T> (this DataRow first, System.Data.DataColumn column, System.Data.DataRowVersion version);

id = (from s in seq1

where s.Field<string>("Name") == "Anthony Adams"

select s.Field<int>(dt1.Columns[0], DataRowVersion.Current)).Single<int>();

 

Console.WriteLine("Anthony's Id retrieved with prototype 1 is: {0}", id);

 

2

public static T Field<T> (this DataRow first, string columnName, System.Data.DataRowVersion version);

id = (from s in seq1

where s.Field<string>("Name") == "Anthony Adams"

select s.Field<int>("Id", DataRowVersion.Current)).Single<int>();

Console.WriteLine("Anthony's Id retrieved with prototype 2 is: {0}", id);

 

3

public static T Field<T> (this DataRow first, int ordinal, System.Data.DataRowVersion version);

id = (from s in seq1

where s.Field<string>("Name") == "Anthony Adams"

select s.Field<int>(0, DataRowVersion.Current)).Single<int>();

Console.WriteLine("Anthony's Id retrieved with prototype 3 is: {0}", id);

 

4

public static T Field<T> (this DataRow first, System.Data.DataColumn column);

id = (from s in seq1

where s.Field<string>("Name") == "Anthony Adams"

select s.Field<int>(dt1.Columns[0])).Single<int>();

Console.WriteLine("Anthony's Id retrieved with prototype 4 is: {0}", id);

 

5

public static T Field<T> (this DataRow first, string columnName);

id = (from s in seq1

where s.Field<string>("Name") == "Anthony Adams"

select s.Field<int>("Id")).Single<int>();

Console.WriteLine("Anthony's Id retrieved with prototype 5 is: {0}", id);

 

6

public static T Field<T> (this DataRow first, int ordinal);

id = (from s in seq1

where s.Field<string>("Name") == "Anthony Adams"

select s.Field<int>(0)).Single<int>();

Console.WriteLine("Anthony's Id retrieved with prototype 6 is: {0}", id);

 

 

           

Input –

Student[] students = {

new Student { Id = 1, Name = "Joe Rattz" },

new Student { Id = 7, Name = "Anthony Adams" },

new Student { Id = 13, Name = "Stacy Sinclair" },

new Student { Id = 72, Name = "Dignan Stephens" }

};

DataTable dt1 = GetDataTable(students);

IEnumerable<DataRow> seq1 = dt1.AsEnumerable();

int id;

Output –

Anthony's Id retrieved with prototype 1 is: 7

Anthony's Id retrieved with prototype 2 is: 7

Anthony's Id retrieved with prototype 3 is: 7

Anthony's Id retrieved with prototype 4 is: 7

Anthony's Id retrieved with prototype 5 is: 7

Anthony's Id retrieved with prototype 6 is: 7

 

 

SetField<T>

 

-          Similar to the Field operator, SetField also solves both the boxing and null problems, but for writing values rather than reading them.

 

-          Prototypes –

 

 

1

public static void SetField<T> (this DataRow first, System.Data.DataColumn column, T value);

(from s in seq1

where s.Field<string>("Name") == "Anthony Adams"

select s).Single<DataRow>().SetField(dt1.Columns[1], "George Oscar Bluth");

 

Console.WriteLine("{0}Results after calling prototype 1:", System.Environment.NewLine);

foreach (DataRow dataRow in seq1)

{

 Console.WriteLine("Student Id = {0} is {1}", dataRow.Field<int>("Id"),

 dataRow.Field<string>("Name"));

}

 

2

public static void SetField<T> (this DataRow first, string columnName, T value);

(from s in seq1

where s.Field<string>("Name") == "George Oscar Bluth"

select s).Single<DataRow>().SetField("Name", "Michael Bluth");

 

Console.WriteLine("{0}Results after calling prototype 2:", System.Environment.NewLine);

foreach (DataRow dataRow in seq1)

{

 Console.WriteLine("Student Id = {0} is {1}", dataRow.Field<int>("Id"),

 dataRow.Field<string>("Name"));

}

 

3

public static void SetField<T> (this DataRow first, int ordinal, T value);

(from s in seq1

where s.Field<string>("Name") == "Michael Bluth"

select s).Single<DataRow>().SetField(1, "Tony Wonder");

 

Console.WriteLine("{0}Results after calling prototype 3:", System.Environment.NewLine);

foreach (DataRow dataRow in seq1)

{

 Console.WriteLine("Student Id = {0} is {1}", dataRow.Field<int>("Id"),

 dataRow.Field<string>("Name"));

}

 

 

 

           

Input –

Student[] students = {

new Student { Id = 1, Name = "Joe Rattz" },

new Student { Id = 7, Name = "Anthony Adams" },

new Student { Id = 13, Name = "Stacy Sinclair" },

new Student { Id = 72, Name = "Dignan Stephens" }

};

DataTable dt1 = GetDataTable(students);

IEnumerable<DataRow> seq1 = dt1.AsEnumerable();

 

Console.WriteLine("{0}Results before calling any prototype:",System.Environment.NewLine);

foreach (DataRow dataRow in seq1)

{

Console.WriteLine("Student Id = {0} is {1}", dataRow.Field<int>("Id"),

dataRow.Field<string>("Name"));

}

Output –

Results before calling any prototype:

Student Id = 1 is Joe Rattz

Student Id = 7 is Anthony Adams

Student Id = 13 is Stacy Sinclair

Student Id = 72 is Dignan Stephens

 

Results after calling prototype 1,2 & 3:

Student Id = 1 is Joe Rattz

Student Id = 7 is George Oscar Bluth

Student Id = 13 is Stacy Sinclair

Student Id = 72 is Dignan Stephens

 

Student Id = 7 is Michael Bluth

Student Id = 7 is Tony Wonder

 

 

 

DataTable Operators

 

-          Just as the DataRowExtensions class introduces the extension methods Field and SetField, the DataTableExtensions class introduces the extension methods AsDataView, AsEnumerable, and CopyToDataTable [discussed here].
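AsDataView is not covered further below; as a brief sketch (again reusing the Student class and GetDataTable helper from the other samples), it exposes a DataTable, or a LINQ to DataSet query, as a bindable DataView:

DataTable dt = GetDataTable(students);

// A DataView over the whole table.
DataView allStudents = dt.AsDataView();

// A DataView backed by a LINQ to DataSet query; keeping the query in the
// DataSet-specific operators returns an EnumerableRowCollection, which AsDataView accepts.
DataView filtered = dt.AsEnumerable()
                      .Where(row => row.Field<int>("Id") > 10)
                      .AsDataView();

Console.WriteLine("{0} rows in the filtered view", filtered.Count);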

 

AsEnumerable

 

-          The AsEnumerable operator is defined specifically for the DataTable class and returns a sequence of DataRow objects.

-          The purpose of this operator is to return a sequence of type IEnumerable<DataRow> from a DataTable object.

 

-          Syntax & Sample

 

public static IEnumerable<DataRow> AsEnumerable ( this DataTable source)

 

Student[] students = {

new Student { Id = 1, Name = "Joe Rattz" },

new Student { Id = 7, Name = "Anthony Adams" },

new Student { Id = 13, Name = "Stacy Sinclair" },

new Student { Id = 72, Name = "Dignan Stephens" }

};

 

DataTable dt1 = GetDataTable(students);

IEnumerable<DataRow> seq1 = dt1.AsEnumerable();

 

 

 

CopyToDataTable<DataRow>

 

-          AsEnumerable allows you to get a sequence of DataRow objects from a DataTable.

-          CopyToDataTable does the reverse: it allows you to copy the retrieved and modified DataRow objects back into a DataTable.

 

-          Syntax 1

 

 

public static DataTable CopyToDataTable<T> (this IEnumerable<T> source) where T : DataRow;

 

DataTable dt1 = GetDataTable(students);

Console.WriteLine("Original DataTable:");

foreach (DataRow dataRow in dt1.AsEnumerable())

{

 Console.WriteLine("Student Id = {0} is {1}", dataRow.Field<int>("Id"),   dataRow.Field<string>("Name"));

}

 

(from s in dt1.AsEnumerable()

where s.Field<string>("Name") == "Anthony Adams"

select s).Single<DataRow>().SetField("Name", "George Oscar Bluth");

 

DataTable newTable = dt1.AsEnumerable().CopyToDataTable();

 

Console.WriteLine("{0}New DataTable:", System.Environment.NewLine);

foreach (DataRow dataRow in newTable.AsEnumerable())

{

 Console.WriteLine("Student Id = {0} is {1}", dataRow.Field<int>("Id"),  dataRow.Field<string>("Name"));

}

 

           

 

Input –

Student[] students = {

new Student { Id = 1, Name = "Joe Rattz" },

new Student { Id = 7, Name = "Anthony Adams" },

new Student { Id = 13, Name = "Stacy Sinclair" },

new Student { Id = 72, Name = "Dignan Stephens" }

};

Output –

Original DataTable:

Student Id = 1 is Joe Rattz

Student Id = 7 is Anthony Adams

Student Id = 13 is Stacy Sinclair

Student Id = 72 is Dignan Stephens

 

New DataTable:

Student Id = 1 is Joe Rattz

Student Id = 7 is George Oscar Bluth

Student Id = 13 is Stacy Sinclair

Student Id = 72 is Dignan Stephens

 

-          Syntax 2

 

 

public static void CopyToDataTable<T> (this IEnumerable<T> source, DataTable table, LoadOption options ) where T : DataRow;

 

The possible LoadOption values are –

 

-          PreserveChanges: only the original column values are changed

-          Upsert: only the current column values are changed

-          OverwriteChanges: both original and current values are changed

 

 

 

 

-          To make this CopyToDataTable prototype work properly, the destination DataTable must have its primary key fields defined. Without a primary key, this prototype appends all the records from the source DataTable to the destination DataTable.

 

DataTable dt1 = GetDataTable(students);

DataTable newTable = dt1.AsEnumerable().CopyToDataTable();

 

(from s in dt1.AsEnumerable()

where s.Field<string>("Name") == "Anthony Adams"

select s).Single<DataRow>().SetField("Name", "George Oscar Bluth");

 

dt1.AsEnumerable().CopyToDataTable(newTable, LoadOption.Upsert);

 

foreach (DataRow dataRow in newTable.AsEnumerable())

{

Console.WriteLine("Student Id = {0} : original {1} : current {2}",

dataRow.Field<int>("Id"),

dataRow.HasVersion(DataRowVersion.Original) ?

dataRow.Field<string>("Name", DataRowVersion.Original) : "-does not exist-",

dataRow.Field<string>("Name", DataRowVersion.Current));

}

 

Before upserting DataTable:

Student Id = 1 : original Joe Rattz : current Joe Rattz

Student Id = 7 : original Anthony Adams : current Anthony Adams

Student Id = 13 : original Stacy Sinclair : current Stacy Sinclair

Student Id = 72 : original Dignan Stephens : current Dignan Stephens

 

After upserting DataTable:

Student Id = 1 : original Joe Rattz : current Joe Rattz

Student Id = 7 : original Anthony Adams : current Anthony Adams

Student Id = 13 : original Stacy Sinclair : current Stacy Sinclair

Student Id = 72 : original Dignan Stephens : current Dignan Stephens

Student Id = 1 : original -does not exist- : current Joe Rattz

Student Id = 7 : original -does not exist- : current George Oscar Bluth

Student Id = 13 : original -does not exist- : current Stacy Sinclair

Student Id = 72 : original -does not exist- : current Dignan Stephens

 

Note – Because there was no primary key, the changed rows were appended to the destination table instead of updating the existing rows.

 

 

DataTable dt1 = GetDataTable(students);

DataTable newTable = dt1.AsEnumerable().CopyToDataTable();

newTable.PrimaryKey = new DataColumn[] { newTable.Columns[0] };

 

The rest is the same as above.

 

Before upserting DataTable:

Student Id = 1 : original Joe Rattz : current Joe Rattz

Student Id = 7 : original Anthony Adams : current Anthony Adams

Student Id = 13 : original Stacy Sinclair : current Stacy Sinclair

Student Id = 72 : original Dignan Stephens : current Dignan Stephens

 

After upserting DataTable:

Student Id = 1 : original Joe Rattz : current Joe Rattz

Student Id = 7 : original Anthony Adams : current George Oscar Bluth

Student Id = 13 : original Stacy Sinclair : current Stacy Sinclair

Student Id = 72 : original Dignan Stephens : current Dignan Stephens

Note – With the primary key in place, the matching row (Id = 7) is updated in place; only its current value changes, and no rows are appended.

 

 

 

 

Hope this helps.

 

Thanks & Regards,

Arun Manglick || Senior Tech Lead