Channel: Kendo UI Blogs

Kendo UI at SXSW 2013. Free TechCab rides, Open Web party tickets, and free hotel rooms


We are packing our bags to head to Austin for SXSW Interactive in less than two weeks, and we can’t wait! In years past, we've set up a booth in the expo center, but this year, we opted to do something more fun and above all, useful...


Handling CRUD With The Kendo UI JSP Wrappers


This article is going to wrap up what you need to know to get started with the JSP Wrappers for Kendo UI. Let's do just a quick review. Part 1 (Read It) Created a basic JSP Dynamic Web Application Added a data access layer…

Exporting the Kendo UI Grid Data to Excel


On my current project, I have been asked to provide the users with the ability to export the contents of any grid to Excel. To do this, I decided to use the XML SDK 2.0 for Microsoft Office to create the Excel spreadsheet. I’ve used the …

Who Is This JaSON Guy, Anyway?


If you've done any web development in recent years, you've no doubt heard about JSON (typically pronounced like the name "Jason") and have likely used it in your projects whether or not you realize it. The acronym stands for JavaScript Object Notation, and as you can guess, it deals with describing JavaScript objects. But what is it, and why does it matter so much to us as web developers?

Squares and rectangles

There's a mathematical statement that all squares are rectangles, but not all rectangles are squares. The same principle applies to JSON and JavaScript. All JSON documents are valid JavaScript objects. However, not all JavaScript objects are valid JSON documents.

JSON is, basically, the JavaScript equivalent of an XML document. It's a way of serializing an object's attributes - its data - into a document format that can be transported through the interwebs and used by browsers, back-end systems, and everything else in between and around. JSON just happens to use JavaScript as its document format.

Note that the language I'm using is critical, as well. It's not a "JSON object", and it's not a "JavaScript Document". I am explicitly saying "JSON document" because that's what it is - a document in the same way XML is a document.

Versatility

JSON is quite the versatile little document format. Developers tend to use JSON as a direct replacement for XML-based AJAX calls - making the name AJAX ("Asynchronous JavaScript And XML") a bit of a misnomer - but who wants to say "AJAJ"? ... or "AJAJs"?

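For instance, turning a raw AJAX response body into a usable object is a one-liner with the standard JSON.parse method. A quick sketch (the response string here is made up for illustration):

```javascript
// A made-up response body, as it might arrive from an AJAX call
var responseBody = '{"framework": "Kendo UI", "version": "2013.1", "open": true}';

// JSON.parse turns the JSON document into a live JavaScript object
var data = JSON.parse(responseBody);

console.log(data.framework); // "Kendo UI"
console.log(data.open);      // true
```
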
So-called "document" or "document-oriented" database systems like MongoDB, CouchDB and RavenDB (to name only a few) use JSON documents as their primary format for data storage and retrieval. There are countless other uses for JSON documents as well, including configuration files for package managers like NPM, build tools like GruntJS and more.

It's the little-document format that could, but it can make or break your applications if you get the document format wrong with something as simple as leaving an extra , in place.

The document format

At its core, a JSON document is a description of data in key-value pairs, where the key is always a quoted string and the value can be any of a number of things: strings, numbers, boolean values, other JSON objects, and more. JSON is a strict document format, too. One small mistake (like the extra comma I already mentioned) and your document won't be valid.

It can be rather easy to get lost in the JSON format when you're first starting. Being a strict format means there are a lot of rules to follow. Fortunately there are a few rules you can stick to initially, and you will generally be safe in your documents. Eventually you'll start running into the edge cases and invalid document entries, but hopefully by that time you'll have a better understanding of JSON as a document format and be able to figure out the problem quickly.

The core list of rules that I keep in mind when working with JSON is as follows:

  • Open and close { } curly braces to denote the document
  • Use "key": value pairs to describe everything
  • All "keys" are quoted with double quotes (yes, this is a strict double-quote requirement)
  • Values cannot be code / functions
  • No trailing ,

Here's an example of a valid JSON document:

Sample JSON Document

{
  "foo": "bar",
  "baz": 1,
  "quux": true,
  "widget": {
    "wombat": true,
    "that thing": "I sent you"
  }
}

It's easy to think that this is a JavaScript object literal - because it is. You can copy & paste this into your JavaScript code, assign it to a variable, and start accessing attributes from it. But remember that not all JavaScript is valid JSON.

Not a JSON Document

{
  foo: "bar",
  baz: 1,
  "quux": true,
  'widget': {
    "wombat": true,
    "that thing": "I sent you"
  },
}

This example is not a valid JSON document, even though it's a perfectly valid JavaScript object. There are three separate problems with this object definition that prevent it from being a JSON document:

  1. Unquoted keys
  2. A Single-quote key
  3. A trailing comma after widget

Some projects that support JSON documents may be forgiving of this object trying to masquerade as JSON, but most won't. For your own safety and sanity, then, it is better to always be strict in your use of JSON than to have to hunt down strange bugs somewhere in your software systems.

(For a complete listing of all possible values for JSON documents, see the JSON standard website.)

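You can see that strictness first-hand with the standard JSON.parse method - hand it the "almost JSON" text from above and it throws:

```javascript
// The problems from above: an unquoted key and a trailing comma
var almostJson = '{ foo: "bar", "baz": 1, }';

var valid = true;
try {
    JSON.parse(almostJson); // a strict parser...
} catch (e) {
    valid = false;          // ...throws a SyntaxError
}
console.log(valid); // false

// Quote the keys and drop the trailing comma, and it parses fine
var doc = JSON.parse('{ "foo": "bar", "baz": 1 }');
console.log(doc.baz); // 1
```
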
Working with JSON for fun and profit

The good news, even with all the idiosyncrasies of getting a JSON document formatted exactly right, is that we don't have to deal with it directly most of the time. Browser-based tools that use JSON, widget and control suites like Kendo UI, back-end servers and everything in between have all automated this for us under the guidance of the JSON standard.

There are two primary methods that we need to be aware of when working with JSON, and every major browser supports these methods at this point.

  • JSON.stringify(object)
  • (object).toJSON

If you happen to be supporting old browsers, though, don't worry too much. You can grab the json2.js library from Douglas Crockford - the father of the JSON standard - and provide the same capabilities in old browsers. Or, if you're using Kendo UI, you can use the built-in kendo.stringify method. It provides the same shim that json2.js does, but it's built into Kendo UI so you don't have to grab yet another file.

JSON.stringify

The first method, JSON.stringify, is responsible for converting an object into a JSON document. It reads through the object using the process set forth by the JSON standard and creates all of the appropriate {"key": "value"} pairs in the resulting document. This is the method that most toolsets, like Kendo UI, call when they are converting an object to a JSON document.

It's easy to demonstrate JSON.stringify as well. Open a JavaScript console in Firebug, the Firefox developer tools, the Chrome developer tools, or any other browser and type the following:

JSON.stringify({a: "b"})

The result should be:

{"a": "b"}

(though your specific browser console may return this wrapped in an additional layer of " quotes)

You can try this with any object and see the results quite easily. The stringify method will serialize any JavaScript object, though it may not always produce the results that you want or expect.

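For example - and this catches a lot of people - functions and undefined values silently disappear when an object is stringified:

```javascript
var obj = {
    name: "widget",
    render: function () { return "<div/>"; }, // functions are dropped
    missing: undefined,                        // undefined values are dropped
    tags: ["a", undefined, "b"]                // ...but become null inside arrays
};

console.log(JSON.stringify(obj));
// {"name":"widget","tags":["a",null,"b"]}
```
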
(object).toJSON

There will inevitably be times where you need some data to exist on an object in the browser, but don't want that data to be sent back to a server when it is converted into a JSON document for an AJAX call. You may also need to inject some data, or pre-process some data to make the server happy. Well, good news again! We can very easily customize the data that is produced by the stringify method when passing in our objects. We only need to provide a toJSON method on the object in question.

Open your favorite JavaScript console again, and type (copy & paste) this:

Override toJSON

var obj = { foo: "bar", baz: "quux", toJSON: function(){ return {a: "b"}}};
JSON.stringify(obj);

The result will be the same {"a": "b"} that we saw before! Even though this object has a "foo" and a "baz" attribute, and does not directly contain an "a" attribute, we have told the JSON.stringify process to use a set of data that is different from what the object actually has available. When JSON.stringify is called, it looks for the presence of a .toJSON method and, if that method exists, calls it instead of running the standard process to serialize the object.

There's one important and interesting note here, as well. Did you notice that the toJSON method I provided does not return a valid JSON document? This is because the toJSON method only has to return a valid JavaScript object that will be converted to a JSON document. The stringify method will handle the output of the toJSON method instead of the original target object, resulting in the data we want to be serialized, being serialized.

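In other words, whatever toJSON returns is fed back into the normal serialization process, nested objects and all. A quick console sketch:

```javascript
var obj = {
    secret: "do not send", // never reaches the JSON document
    toJSON: function () {
        // This plain object - not a JSON string - is what gets serialized
        return { visible: true, child: { also: "serialized" } };
    }
};

console.log(JSON.stringify(obj));
// {"visible":true,"child":{"also":"serialized"}}
```
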
There are a lot more tricks that the JSON standard and the stringify and toJSON methods can perform, though. For a good read on some of these tricks and trip-ups, check out Jim Cowart's blog post on What You Might Not Know About JSON.stringify().

JSON is your friend - play nice

Like your co-worker, Jason, that you enjoy working with and talking with at lunch, JSON documents are most definitely the friend of a web developer. I encourage you to dig a little deeper, learn more about the JSON standard and most of all - have fun with it. Play with the toJSON method, especially, and see what kind of fun tricks you can play on your server API by sending it data that your objects don't actually contain. Just stick a .toJSON() method directly on your object before you send it down through the AJAX call and see what happens. You might be surprised at how powerful this little bit of knowledge actually is.

About the Author
Derick Bailey is a Developer Advocate for Kendo UI, a developer, speaker, trainer, screen-caster and much more. He's been slinging code since the late 80's and doing it professionally since the mid 90's. These days, Derick spends his time primarily writing JavaScript with back-end languages of all types, including Ruby, NodeJS, .NET and more. Derick blogs at DerickBailey.LosTechies.com, produces screencasts at WatchMeCode.net, tweets as @derickbailey and provides support and assistance for JavaScript, BackboneJS, MarionetteJS and much more around the web.

Kendo UI Spring 2013 Release, Keynote This Wednesday!

The next major Kendo UI release is almost here! This Wednesday, March 20th is the big day, and the online keynote is where you need to be to get your first hosted overview of what's new. Plus, we're giving away some cool prizes!

Kendo UI Spring 2013 Release Today!

It’s Kendo UI launch day, and I hope you’re as excited to get your hands on the new Kendo UI bits as we are to share them with you! We’ve been hard at work since the Q3 2012 release to bring you the most jam-packed release since our initial launch over a year ago!

Q1 2013 Release Resources, Keynote Recording, Winners

To help you on your journey exploring the latest release, this post compiles some of the important follow-up resource links for you, along with the keynote recording and list of keynote raffle winners. Bookmark this post and refer back to it if you need to find any of these important links later!

Introducing JayData And Kendo UI


JayData is a JavaScript data manager library that provides a friendly, unified developer experience over a number of different online and offline data APIs and data sources like IndexedDB, WebSQL or OData. For local databases, JayData provides automatic database creation and schema management (keys, indices). For OData services, JayData supports the translation of the service metadata document into an intelligent client environment where you can interact with data naturally using JavaScript.

Why use JayData with Kendo UI

JayData lets you connect the rich feature-set of Kendo UI controls and the core framework with a number of popular data store options around you. With zero or minimal plumbing you can plug in CRUD-capable connectivity to local data APIs like IndexedDB or WebSQL, and online endpoints that support OData or the ASP.NET WebAPI. Kendo UI with JayData just works.

Figure 1: This is a sample Kendo UI Grid that manages tasks. Data can be stored in many different places.

Figure 2: Data stored in WebSQL 

Figure 3: Data stored in IndexedDB

Figure 4: Data stored on the server, published with OData

Getting Started With JayData And Kendo UI

You need to include the following scripts to unleash the magic. 

<script src="http://code.jquery.com/jquery-1.8.2.min.js"></script>
<script src="http://cdn.kendostatic.com/2012.3.1315/js/kendo.all.min.js"></script>
<script src="http://include.jaydata.org/datajs-1.0.3.js"></script>
<script src="http://include.jaydata.org/jaydata.js"></script>
<script src="http://include.jaydata.org/jaydatamodules/kendo.js"></script>
<link href="http://cdn.kendostatic.com/2012.3.1315/styles/kendo.common.min.css" rel="stylesheet" type="text/css" />
<link href="http://cdn.kendostatic.com/2012.3.1315/styles/kendo.default.min.css" rel="stylesheet" type="text/css" />

Make sure you have a valid Kendo UI license, even when using the CDN.

Define a simple model

To get started interacting with a local data store using JayData, we first need to create a model. The simplest way to do that in JayData is using the $data.define method. This accepts a model name and a model definition and returns the JavaScript class function for the specified entity type. These types are how JayData interacts with the database.

Our example Task model in the beginning contains only two simple fields. None of them are mandatory.

var Task = $data.define("Task", {
    Todo: String,
    Completed: Boolean
});

Interactive Sample

That's all there is to it. You can start storing Task data locally. In JayData, every model type has a default local database configured - either WebSQL or IndexedDB, depending on device / browser capabilities.

You can also set this default store to be an OData feed or an ASP.NET WebAPI controller. You might want to read more on the ItemStore API.

Beyond JavaScript primitive types such as Boolean, String, Date and Number, you can use a wide set of more complex field types like Array, $data.GeographyPoint or another entity type to create a complex field or a foreign entity relation. These are out of the scope of this post.

Auto CRUD the local database with a Kendo Grid

JayData has multiple integration points for Kendo UI. Arguably the most important one is the ability to provide you with Kendo UI DataSource instances from a JayData entity type. To get a fully configured DataSource instance that retrieves items and creates new rows in the default local store of the given entity type, simply use type.asKendoDataSource().

<body>
  <div id="grid"></div>
  <script>
    var Task = $data.define("Task", {
        Todo: String,
        Completed: Boolean
    });
    $('#grid').kendoGrid({
        dataSource: Task.asKendoDataSource(),
        filterable: true,
        sortable: true,
        pageable: true,
        editable: 'popup',
        toolbar: ['create', 'save']
    });
  </script>
</body>

Interactive Sample

CRUD the local database with MVVM and a ListView

var Task = $data.define("Task", {
    Todo: { type: String, required: true },
    Completed: Boolean
});

var viewModel = kendo.observable({
    taskSource: Task.asKendoDataSource(),
    addNew: function () { 
     $('#listView').getKendoListView().add();
    },
});

kendo.bind($("#container"), viewModel);

Interactive Sample

To learn more about the Kendo UI DataSource, read this fine post from the Kendo team.

Overriding The Default Local Data Store

By default, JayData configures a local database behind every entity type. This default store uses the so-called local provider, which decides what the backing store should be based on device / browser capabilities. You can override this manually by setting the provider property in the type configuration to webSql or indexedDb.

WebSQL

//This is the default option if the client supports WebSQL. 
$('#grid').kendoGrid({
    dataSource: Task.asKendoDataSource({ provider: 'webSql', databaseName:'TaskDB' }),
    ...
}); 

IndexedDB

As with WebSQL, JayData handles the duties of creating or updating the local database schema (collections, indices) for IndexedDB.

$('#grid').kendoGrid({
    dataSource: Task.asKendoDataSource({ provider: 'indexedDb', databaseName:'TaskDB' }),
    ...
});

Working With OData And WebAPI

You can also have the provider be an OData service or WebAPI controller.

OData

As with WebSQL or IndexedDB, JayData supports the full set of CRUD operations over the OData protocol. Your model definition must be compatible with the schema defined by the service metadata. The url property specifies the endpoint of the OData service.

$('#grid').kendoGrid({
    dataSource: Task.asKendoDataSource({ provider: 'oData', url: '//your.svc/Tasks'}),
    ...
});

Interactive Sample

With OData, you can also have the JayData library infer model types from the service metadata. This can spare you a lot of work, especially when it comes to large or constantly changing models.

You can create a read/write OData service with WCF Data Services and Entity Framework, or you could create an OData service with just JavaScript and MongoDB. You might even choose to build your own OData service in the cloud using JayStack since it has a super friendly and easy to use interface.

ASP.NET WebAPI

You can use the default ASP.NET WebAPI Controllers to create a simple, query capable store API that persists its data with Entity Framework.

Then you just need to specify the provider as WebAPI.

$('#grid').kendoGrid({
    dataSource: Task.asKendoDataSource({ provider: 'webApi', url: '//api/Tasks'}),
    ...
});

Taking Control Of Fields

You might have noticed in the local database CRUD interactive sample that an Id column is also present in the listings. This is because entities must have some data that uniquely identifies them, and if you don’t designate the key, JayData will append an auto-incrementing integer key for you. The following code defines a string key that is assigned by the user (instead of being an auto-increment value). It also makes the Todo field required and assigns a defaultValue to a DueDate field.

var Task = $data.define("Task", {
    Id: { type: String, key: true },
    Todo: { type: String, required: true },
    Completed: Boolean,
    DueDate: { 
        type: Date, 
        defaultValue: new Date() 
    }
});

Interactive Sample

On top of type and required, JayData supports the following field configurations:

  • computed
  • minValue
  • maxValue
  • nullable
  • elementType
  • pattern

Not Just CRUD

On top of creating or editing data, the JayData library combined with the Kendo UI DataSource lets you filter, sort or page data efficiently – all happening in the local database or on the server.

With WebSQL and IndexedDB this means that the filter statement invoked on a data source instance will actually be compiled into a SQL statement or a cursor operation.

With OData or WebAPI a properly formatted OData query is built and transmitted to the server.

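As a rough illustration (the exact URL shape depends on your JayData and OData versions, and the endpoint name here is made up), a filtered, sorted, paged grid might produce a request along these lines:

```
GET /your.svc/Tasks?$filter=(Completed eq true)&$orderby=Todo&$top=25&$skip=0
```
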
I've set up a working interactive sample so that you can play with this a bit. Try setting some filtering or sorting on the grid. Make sure to watch the console so you can see the requests containing all the filtering/sorting parameters.

Setting DataSource properties

The almighty asKendoDataSource method accepts the standard Kendo UI DataSource options object. If unspecified by the developer, JayData sets the serverFiltering, serverSorting and serverPaging flags to true and sets a default pageSize of 25.

Let's talk about the batch setting for a moment. This controls the behavior of pending change operations. The default value is false. Setting batch to true will cause the local providers to save the entire change set in one transaction. With the OData provider, the batch setting switches from solo REST calls to the OData $batch protocol, giving you better overall completion time when creating multiple entities. WebAPI does not support batch operations.

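As a sketch, the options object you hand to asKendoDataSource is just a standard Kendo UI DataSource options object; the values below are illustrative overrides of the JayData defaults described above:

```javascript
// Standard Kendo UI DataSource options; these override JayData's defaults
var dataSourceOptions = {
    serverFiltering: true,
    serverSorting: true,
    serverPaging: true,
    pageSize: 50,   // JayData's default is 25
    batch: true     // one transaction / one OData $batch request per save
};

// With JayData and Kendo UI loaded, you would pass this straight through:
// var taskSource = Task.asKendoDataSource(dataSourceOptions);
```
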
Working With Complex Models, Tables and Databases

Beyond very simple cases like a shopping cart or a basic task list, we will need more than just single entity types. For instance, we might like to have drill-down features, master-detail listings, drop-downs based on domain data, etc. In order to handle these more complex scenarios, we need a container to collect types that are to be stored in the same place. The container holds types similar to the way a database is a container for tables.

In JayData you can define such a container by inheriting from the EntityContext class. We can think of EntityContext classes as client proxies to a local database or a remote data service.

Define a Simple Database Manually

Let’s extend our Task list example to add a categorization feature. This involves defining a Category type, and a Task type with a CategoryId field. Then you need to subclass/inherit from $data.EntityContext and define properties to denote tables.

var Task = $data.define("Task", {
    Todo: String,
    Completed: Boolean,
    CategoryId: Number
});

var Category = $data.define("Category", {
    CategoryName: String
});

var TaskDB = $data.EntityContext.extend("TaskDB", {
    Tasks: { type: $data.EntitySet, elementType: Task },
    Categories: { type: $data.EntitySet, elementType: Category }
});

var database = new TaskDB({ provider: 'local' });
var taskSource = database.Tasks.asKendoDataSource();
var categorySource = database.Categories.asKendoDataSource();

 

With a couple of lines of Kendo UI code, you can have a foreign key field implemented.

Interactive Sample

Meanwhile in the database:

Back to OData

OData is a fantastic protocol. Not only does it provide standardized ways to access the functionality of the data API, but it also exposes metadata in a way that lets the client know about the semantics of the service, like entity types or service function signatures.

In the case of containers, if the OData service has the same origin as the client code, or the user's browser supports CORS, JayData provides dynamic metadata processing, turning the $metadata document into a fully built client object model. It does all the $data.defines and object extending on your behalf. The story is pretty much the same afterwards…

$data.service("/northwind.svc", function (factory, type) {
    var northwind = factory();
    $('#grid').kendoGrid({
        dataSource: northwind.Products.asKendoDataSource(),
        editable: 'inline',
        pageable: true,
    });
});

Interactive Sample

Please be aware that CORS only works reliably in modern browsers.

Working with OData on a different origin (without CORS)

As seen in previous sections, you can always define any entity model and point it at an OData endpoint - and if they at least partially match, the CRUD operations will work. With $data.service you can be sure that you are using the correct model, as it is generated from the service metadata. But this requires the $metadata document to be accessible from the origin of your app.

If this is not the case, you can use JaySvcUtil, which downloads the service metadata and generates entity files that you can include in your HTML5 project. From that point, JayData will fall back to JSONP. This must be supported on the server, and only GET operations currently work.

Interactive Sample

Synchronize Online and Offline

Ah, the holy grail of synchronizing your offline datastore with the online one when it becomes available.

In the service init callback we receive a factory method that produces a living data context connected to the OData service. The type parameter is the EntityContext class that we can use to create a local database with the same schema as the online store.

var localDB = new type({ provider: 'local', databaseName: 'localTodoDb' });

A very simple example to have an offline copy of the remote data might look like this:

$data.service("/myDataService.svc", function (factory, type) {
    var northwind = factory();
    var localDB = new type({ provider: 'local', databaseName: 'localCache' });
    $.when(northwind.onReady(), localDB.onReady()).then(function () {
            return northwind
                .Categories
                .toArray(function (categories) {
                    localDB.Categories.addMany(categories);
                    return localDB.saveChanges();
                })
        }).then(function () {
            alert("sync is done");
        });
});

 

Note that this example is rather error prone and doesn't scale well, but makes for a good simple example. We have examples for creating a better scaling version with paging and continuation, instead of slurping all the data in at once.

Want More JayData And Kendo UI?

If we've piqued your interest in using JayData with Kendo UI to manage your local and remote data, we have quite a few demos and samples for you to get your hands on today! These include complex operations like cascading drop-down lists and even a mobile application sample! Yeah. We've got you covered.

About the Author
Peter Zentai is an architect and developer evangelist at JayStack.com, creator of JayData. Peter has an eternal passion toward data driven apps and domain driven architecture - as he thinks it is the information schema that should work for you and not the other way around (at least this is what he says when asked about what he is doing at the moment). His current hobby (and also lifestyle) is to bring the OData protocol to JavaScript - in both the client and server scenarios.


Introducing the Kendo UI Labs

Last week in our Spring 2013 launch event, we announced a brand new website for open-source Kendo UI extensions and integrations. In this post, I’ll share a bit more about the site...

What's The Point Of Promises?


By this point in time, just about every JavaScript developer and their grandma has heard about Promises. If you haven't, then you're about to. The idea for promises is laid out in the Promises/A spec by the folks in the CommonJS group. Promises are generally used as a means for managing callbacks for asynchronous operations, but due to their design, they are way more useful than that. In fact, due to their many uses, I've had numerous people tell me - after I've written about promises - that I've "missed the point of promises". So what is the point of promises?

A Bit About Promises

Before I get into "the point" of promises, I think I should give you a bit of insight into how they work. A promise is an object that - according to the Promises/A spec - requires only one method: then. The then method takes up to three arguments: a success callback, a failure callback, and a progress callback (the spec does not require implementations to include the progress feature, but many do). A new promise object is returned from each call to then.

A promise can be in one of three states: unfulfilled, fulfilled, or failed. A promise will start out as unfulfilled and if it succeeds, will become fulfilled, or if it failed, will become failed. When a promise moves to the fulfilled state, all success callbacks registered to it will be called and will be passed the value that it succeeded with. In addition, any success callbacks that are registered to the promise after it has already become fulfilled will be called immediately.

The same thing happens when the promise moves to the failed state, except that it calls the failure callbacks rather than the success callbacks. For the implementations that include the progress feature, a promise can be updated on its progress at any time before it leaves the unfulfilled state. When the progress is updated, all of the progress callbacks will be immediately invoked and passed the progress value. Progress callbacks are handled differently than the success and failure callbacks: if you register a progress callback after a progress update has already happened, the new callback will only be called for progress updates that occur after it was registered.

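To make the "callbacks registered after fulfillment still fire" behavior concrete, here's a sketch using the now-standard built-in Promise object (any Promises/A-compliant library behaves the same way for this case; the built-in is just convenient to try in a console):

```javascript
var promise = new Promise(function (resolve) {
    resolve("done"); // fulfilled right away
});

// Registered before we knew the outcome - called with the value
promise.then(function (value) {
    console.log("early handler:", value); // "early handler: done"
});

setTimeout(function () {
    // Registered well after fulfillment - still called, with the same value
    promise.then(function (value) {
        console.log("late handler:", value); // "late handler: done"
    });
}, 10);
```
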
We won't get into how the promise states are managed because that isn't in the spec and it differs per implementation. In later examples you'll see how it's done, but for right now this is all you need to know.

Handling Callbacks

Handling callbacks for asynchronous operations, as previously mentioned, is the most basic and most common use of promises. Let's compare a standard callback to using a promise.

Callbacks

// Normal callback usage
asyncOperation(function() {
    // Here's your callback
});

// Now `asyncOperation` returns a promise
asyncOperation().then(function(){
    // Here's your callback
});

Just by looking at this example, I doubt anyone would really care to use promises. There seems to be no benefit, except maybe that "then" makes it more obvious that the callback function is called after asyncOperation has completed. Even with this benefit, though, we now have more code (abstractions should make our code shorter, shouldn't they?) and the promise will be slightly less performant than the standard callback.

Don't let this deter you, though. If this was the best that promises could do, this article wouldn't even exist.

Pyramid of Doom

Many articles that you find on the web will cite what is called the "Pyramid of Doom" as their primary reason for using promises. This is referring to needing to perform multiple asynchronous operations in succession. With normal callbacks, we'd end up nesting calls within each other; with this nesting comes more indentation, creating a pyramid (pointing to the right), hence the name "Pyramid of Doom". If you only need to perform a couple asynchronous operations in succession, then this isn't too bad, but once you need to do 3 or more, it becomes difficult to read, especially when there's a decent amount of processing that needs to be done at each step. Using promises can help the code flatten out and become more readable again. Take a look.

Pyramid Of Doom

// Normal callback usage => PYRAMID OF DOOOOOOOOM
asyncOperation(function(data){
    // Do some processing with `data`
    anotherAsync(function(data2){
        // Some more processing with `data2`
        yetAnotherAsync(function(){
            // Yay we're finished!
        });
    });
});

// Let's look at using promises
asyncOperation()
.then(function(data){
    // Do some processing with `data`
    return anotherAsync();
})
.then(function(data2){
    // Some more processing with `data2`
    return yetAnotherAsync();
})
.then(function(){
    // Yay we're finished!
});

As you can see, the use of promises makes things much flatter and more readable. This works because, as mentioned earlier, then returns a promise, so you can chain then calls all day long. The promise returned by then is fulfilled with the value that is returned from the callback. If the callback returns a promise (as is the case in this example), the promise that then returns is fulfilled with the same value that your callback's promise is fulfilled with. That's a mouthful, so I'll walk through it step by step to help you understand.

asyncOperation returns a promise object, so we call then on that promise object and pass it a callback function; then will also return a promise. When asyncOperation finishes, it'll fulfill its promise with data. The callback is then invoked and data is passed in as an argument to it. If the callback doesn't have a return value, the promise that was returned by then is immediately fulfilled with no value. If the callback returns something other than a promise, the promise that was returned by then will be immediately fulfilled with that value. If the callback returns a promise (like in the example), then the promise that was returned by then will wait until the promise that was returned by our callback is fulfilled. Once our callback's promise is fulfilled, the value that it was fulfilled with (in this case, data2) is handed to then's promise, which is fulfilled with data2. And so on. It sounds a bit complicated, but it's actually pretty simple. If it doesn't sink in right away, walk through the example again with these rules in mind.

Using Named Callbacks Instead

Apparently promises aren't the only way to flatten out this structure though. After writing a post that mentioned promises solving the Pyramid of Doom problem, a man commented on the post saying...

I think promises are sometimes useful, but the issue of "nested" callbacks (christmas tree syndrome) can be trivially solved by just passing a named function as an argument instead of anonymous functions:

asyncCall( param1, param2, HandlerCallback );

function HandlerCallback(err, res){
    // do stuff
}

His example only goes one level deep, but the point holds. Let's expand my previous example to make this easier to see.

Named Callbacks

// The same chain as before, flattened with named callbacks
asyncOperation(handler1);

function handler1(data) {
    // Do some processing with `data`
    anotherAsync(handler2);
}

function handler2(data2) {
    // Some more processing with `data2`
    yetAnotherAsync(handler3);
}

function handler3() {
    // Yay we're finished!
}

Would you look at that! He's absolutely right! It did flatten out the structure. But there's a problem here that also existed in the old callback example that I didn't mention yet: dependency and reusability. Dependency and reusability are closely tied to one another in an inverse fashion: the fewer dependencies something has, the more reusable it can become. In the above example, handler1 depends on handler2 and handler2 depends on handler3. This means that handler1 can't be reused for any purpose unless handler2 is also present. What's the point of naming your functions if you're not going to reuse them?

The worst part is that handler1 doesn't care at all what happens in handler2. It doesn't need handler2 at all except to make anotherAsync work. So, let's remove those dependencies and make the functions more reusable by using promises.

Chained Callbacks

asyncOperation().then(handler1).then(handler2).then(handler3);

function handler1(data) {
    // Do some processing with `data`
    return anotherAsync();
}

function handler2(data2) {
    // Some more processing with `data2`
    return yetAnotherAsync();
}

function handler3() {
    // Yay we're finished!
}

Isn't this so much better? Now handler1 and handler2 couldn't care less if any other functions existed. Wanna see something really great? Now handler1 can be used in situations that don't need handler2. Instead, after handler1 is done, we'll use anotherHandler.

Reusing Functions

asyncOperation().then(handler1).then(anotherHandler);

function handler1(data) {
    // Do some processing with `data`
    return anotherAsync();
}

function anotherHandler(data2) {
    // Do some really awesome stuff that you've never seen before. It'll impress you
}

Now that handler1 is decoupled from handler2, it can be used in more situations, specifically ones where we don't want the functionality that handler2 provides. This is reusability! The only thing that the commenter's "solution" did was to make the code more readable by eliminating the deep indentation. We don't want to eliminate indentation merely for the sake of eliminating indentation. The numerous levels of indentation are merely a sign that something is wrong; they are not the problem itself. It's like a headache that is caused by dehydration. The real problem is the dehydration, not the headache. The solution is to get hydrated, not to take some pain medicine.

Parallel Asynchronous Operations

In the article that I mentioned earlier, I had compared promises to events for handling asynchronous operations. Sadly, I didn't do a very good comparison, as noted by a couple of people in the comments. I showed the strengths of promises, then moved on to events, where I showed their strengths as they were being used in my particular project. There was no comparing and contrasting. One commenter wrote this (with a few grammar fixes):

I think using the example in the post is a bad comparison. An easy demonstration of the value of promises would be if pressing the imaginary "Start Server Button" had to start not only a web-server but also a database server and then update the UI only when they both were running.

Using the .when method of promises would make this "multiple-asynchronous-operations" example trivial, whereas reacting to multiple asynchronous events requires a non-trivial amount of code.

He was absolutely right. I didn't actually compare the two. The point of the article was actually to show that promises aren't the only mechanism for asynchronous operations and in some situations, they aren't necessarily the best either. In the situation that the commenter pointed out, promises would definitely be the best solution. Let's take a look at what he's talking about.

jQuery has a method named when that will take an arbitrary number of promise arguments and return a single promise. If any of the promises passed in fails, then the promise that when returned will fail. If all of the promises are fulfilled, then each of their values will be passed to the attached callbacks in the order that the promises were specified.

It's very useful for doing numerous asynchronous operations in parallel and then only moving on to the callback after every one of them is finished. Let's take a look at a simple example.

jQuery.when

// Each of these async functions return a promise
var promise1 = asyncOperation1();
var promise2 = asyncOperation2();
var promise3 = asyncOperation3();

// The $ refers to jQuery
$.when(promise1, promise2, promise3).then(
    function(value1, value2, value3){
        // Do something with each of the returned values
    }
);

This is often referred to as one of the best things about promises, and is partially the point of promises. I agree that it's a wonderful feature that greatly simplifies some things, but this mechanism for the when method isn't even described in any of the Promises specs, so I doubt it's the point of promises. There is a spec that details a method named when, but it is entirely different. jQuery is the only library that I can find that contains this version of the when method. Most promises libraries, such as Q, Dojo, and when.js, implement the when method according to the Promises/B spec, but don't implement the when method that is being talked about by the commenter. However, the Q library has a method named all and when.js has a method named parallel that work the same way, except that they take an array rather than an arbitrary number of arguments.
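For comparison, the native Promise.all (standardized after this was written) covers the same use case as jQuery's version of when. A sketch with stand-in operations:

```javascript
// Stand-ins for three parallel async operations; each returns a promise.
function asyncOperation1() { return Promise.resolve(10); }
function asyncOperation2() { return Promise.resolve(20); }
function asyncOperation3() { return Promise.resolve(30); }

// Promise.all takes an array (like Q's `all`) rather than jQuery's
// arbitrary argument list, and fulfills with an array of the results
// in the same order the promises were given.
var combined = Promise.all([
    asyncOperation1(),
    asyncOperation2(),
    asyncOperation3()
]).then(function (values) {
    // Do something with each of the returned values
    return values[0] + values[1] + values[2];
});
```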

Value Representation

Another commenter left me this note:

A Promise is a better way to handle the following scenario:

"I want to find an user in this database, but the find method is asynchronous."

So, here we have a find method which doesn't immediately return a value. But it does eventually "return" a value (by means of a callback), and you want to handle that value in some way. Now, by using a callback you can define a continuation, or "some code that will deal with that value at a later time."

Promises change that "hey, here's some code to deal with the value you'll find". They are something that allow the "find" method to say "hey, I'll be busy finding you the information you asked for, but on the mean time you can hang onto this Thing that represents that value, and you can handle it in any way you want in the mean time, just like the actual thing!"

Promises represent real values. That's the catch. And they work when you can deal with the Promises in the same way you'd do with the actual thing. That Promise implementations in JavaScript expect you to pass a callback function to it is just an "accident", it's not the important thing.

I believe this really is the point of promises. Why? Read the first sentence of the Promises/A spec: "A promise represents the eventual value returned from the single completion of an operation." Makes it kinda obvious, doesn't it? Well, even if that is the point, that won't stop me from presenting other people's views later in this article. Anyway, let's talk about this idea a bit.

This concept is wonderful, but how does it flesh itself out in practice? What does using promises as a representation of values look like? First let's take a look at some synchronous code:

Synchronous Code

// Search a collection for a list of "specific" items
var dataArray = dataCollection.find('somethingSpecific');

// Map the array so that we can get just the ID's
var ids = map(dataArray, function(item){
    return item.id;
});

// Display them
display(ids);

Ok, this works great if dataCollection.find is synchronous. But what if it's asynchronous? I mean look at this code; it's totally written in a synchronous way. There's no way this could work exactly the same if find is asynchronous, right? WRONG. We just need to change map and display to accept promises as arguments to represent the values they'll be working on. Also, find and map need to return promises. So let's assume that dataCollection doesn't actually contain data: it just makes AJAX calls to retrieve the data. So now it'll return a promise. Now, let's rewrite map and display to accept promises, but we'll give them different names: pmap and pdisplay.

Accept Promises As Parameters

function pmap (dataPromise, fn) {
    // Always assume `dataPromise` is a promise... cuts down on the code
    // `then` returns a promise, so let's use that instead of creating our own
    return dataPromise.then(
        // Success callback
        function(data) {
            // Fulfill our promise with the data returned from the normal synchronous `map`
            return map(data, fn);
        }
        // Pass errors through by not including a failure callback
    );
}

function pdisplay(dataPromise) {
    dataPromise.then(
        // Success callback
        function(data) {
            // When we have the data, just send it to the normal synchronous `display`
            display(data);
        },
        // Failure callback
        function(err) {
            // If it fails, we'll just display the error
            display(err);
        }
    );
}

That wasn't too difficult, was it? Now let's rewrite the example from above using these new functions:

Asynchronous Code

// Search a collection for a list of "specific" items
var dataArray = dataCollection.find('somethingSpecific');

// Map the array so that we can get just the ID's
var ids = pmap(dataArray, function(item){
    return item.id;
});

// Display them
pdisplay(ids);

All I had to do was change the names of the functions to the new ones. If you're ambitious, you could write the functions with the same name, and have them able to accept either a promise or a normal value and react accordingly. In this new code, the promises are used to represent the eventual value that they'll return, so we can write the code in a way that makes it look like the promises are the real values.
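If you do take that ambitious route, the heart of it is a small normalizer. Here's a sketch using the native Promise; the name toPromise is mine, not from any library:

```javascript
// Wrap a value so downstream code can treat plain values and promises alike.
function toPromise(value) {
    var isThenable = value && typeof value.then === "function";
    // Plain values become already-fulfilled promises; promises pass through.
    return isThenable ? value : Promise.resolve(value);
}

// Both of these can now be consumed with the exact same .then-based code:
var fromValue = toPromise([1, 2, 3]);
var fromPromise = toPromise(Promise.resolve([4, 5, 6]));
```

With this helper, pmap and display could keep their original names and simply call toPromise on their first argument before attaching callbacks.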

I lied a little bit. Above I said "this could work exactly the same", but there is a catch. Other than the name changes for some of the functions, there is something else here that is different: any code that appears after the call to pdisplay will probably be called before the actual displaying has happened. So you either need to make sure that the code that comes after it doesn't depend on the displaying being finished, or you need to return a promise from pdisplay and have the rest of the code run after that promise is fulfilled. Part of the reason I didn't make pdisplay return a promise in my example is that it didn't have a return value, so the promise wouldn't be used to represent a value, which is what we're talking about in this section.

In any case, you can see how making your functions accept promises, rather than just normal values, can make your code look much cleaner and much closer to working like synchronous code. That's one of the beauties of promises. There's another post about this concept, from a slightly different perspective on Jedi Toolkit's blog.

Using Internally For Fluent APIs

One person who commented on my article said this:

I think those of us writing promises implementations are doing a bad job at explaining what promises are for. My opinion is that you as a user should never have to interact with promises using then(). then() should be a way for promise-consuming libraries to interact with each other and to provide fluent APIs. You should still use callbacks and events as usual.

His comment matches pretty nicely with the previous section about using promises to represent values. By using promises to represent values, we can create simpler APIs, as seen above, but we're talking about a chaining API here. Basically, this commenter is telling us to use callbacks at the end of a chain of asynchronous commands, so we're still doing what we're used to doing (using callbacks in the end) and no one can tell that we're using promises. He wants to keep promises away from the normal users and just use them internally within our own libraries. Take a look at this example that queries a database and filters the results more and more asynchronously:

Chained .then Calls

database.find()
.then(function(results1) {
    return results1.filter('name', 'Joe');
})
.then(function(results2) {
    return results2.limit(5);
})
.then(function(resultsForReal) {
    // Do stuff with the results
});

For whatever reason, filter and limit are actually asynchronous. You probably wouldn't think they would be, but that's how this works. Well, the commenter is suggesting that the API be changed so that the user can use it like this instead:

Fluent API Example

database.find().filter('name', 'Joe').limit(5, function(results){
    // Do stuff with the results
});

That seems to make more sense to me. If you have the brains to make this work in your libraries, then this is the route you should take. You could change it up a bit and instead of using a normal callback, you could still return a promise:

Returning A Promise

var results = database.find().filter('name', 'Joe').limit(5);

results.then(function(realResults){
    // Do stuff with the results
});

// OR use results like in the previous section:
doStuffWith(results);

The choice is up to you here. I think enough developers understand promises that there's nothing wrong with returning promises to your users, but it's really a matter of opinion. Either way it's better than the original situation where we needed to chain then calls.

Error Containment for Synchronous Parallelism

There's a pretty-well-known article titled You're Missing the Point of Promises, which falls in line with the topic of this post. In that article Domenic Denicola (nice alliterative name) points us to the part of the Promises/A spec that talks about the then function.

then(fulfilledHandler, errorHandler, progressHandler)

Adds a fulfilledHandler, errorHandler, and progressHandler to be called for completion of a promise. The fulfilledHandler is called when the promise is fulfilled. The errorHandler is called when a promise fails. The progressHandler is called for progress events. All arguments are optional and non-function values are ignored. The progressHandler is not only an optional argument, but progress events are purely optional. Promise implementors are not required to ever call a progressHandler (the progressHandler may be ignored), this parameter exists so that implementors may call it if they have progress events to report.

This function should return a new promise that is fulfilled when the given fulfilledHandler or errorHandler callback is finished. This allows promise operations to be chained together. The value returned from the callback handler is the fulfillment value for the returned promise. If the callback throws an error, the returned promise will be moved to failed state.

In his article, he points us to that last paragraph, which he refers to as "the most important" paragraph. He says this:

The thing is, promises are not about callback aggregation. That's a simple utility. Promises are about something much deeper, namely providing a direct correspondence between synchronous functions and asynchronous functions.

I completely agree with this observation. He then goes on to focus very specifically on the final sentence: "If the callback throws an error, the returned promise will be moved to failed state." The main reason he focuses on this sentence is because jQuery's implementation, which supposedly implements the Promises/A spec, doesn't do this. If an error is thrown in a callback, it will go uncaught by the promises implementation.

This is very important for many people, though I haven't yet hit a situation where it was a real issue, since I don't throw errors very often. The promise equivalent of an error or exception is a failed promise, so if an error occurs, there should be a failure rather than an actual error being thrown. This way we can continue to use promises to parallel synchronous operations. I think we should all go report this bug to jQuery. The more people who report it, the more likely it'll get fixed soon. jQuery's promises are one of the most often used implementations, so we need to make sure that they do it correctly.
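The behavior the spec calls for is easy to see with the native Promise, which does implement it correctly. A minimal sketch:

```javascript
var result = Promise.resolve("ok")
    .then(function () {
        // A thrown error doesn't escape; it moves the promise returned by
        // this `then` to the failed state, as the spec's last sentence requires...
        throw new Error("something broke");
    })
    .then(null, function (err) {
        // ...so the failure callback catches it, paralleling try/catch
        // in synchronous code.
        return "recovered from: " + err.message;
    });
```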

Conclusion

DANG! This is definitely the longest post I've ever written. I was expecting it to be half this long! Anyway, the point of promises is to represent the eventual resulting value from an operation, but the reason to use them is to better parallel synchronous operations. Ever since asynchronous programming hit the scene, callbacks have been popping up all over the place, obscuring our code in strange ways. Promises are a means of changing that: they let us write code in a synchronous style while keeping its execution asynchronous.

About the Author
Joe Zimmerman has been doing web development ever since he found an HTML book on his dad's shelf when he was 12. Since then, JavaScript has grown in popularity and he has become passionate about it. He also loves to teach others through his blog and other popular blogs. When he's not writing code, he's spending time with his wife and children and leading them in God's Word. You can follow him @joezimjs.

New Kendo UI Widget: The Tooltip

If you joined us for the Q1 2013 Release Keynote, you may remember Todd mentioning that this release has the most new widgets since the original release of Kendo UI. One of those new kids is the Tooltip widget.

Balloon Help

Tooltips are generally identified as a popup that contains more information for a given element on the screen. I've done a cursory background check on this tooltip fella, and it looks like the tooltip first appeared under the name "balloon help" as part of Apple's System 7.0 (AKA OS 7). The intention was originally to display help in balloons sort of like in a comic strip.

They were later made ubiquitous by Microsoft as an integral part of the Office 95 Toolbars. This is where the "balloon help" became the "Tooltip". The rest, as they say, is history.

The Tooltip Today

In modern web applications, the Tooltip is an extremely powerful way of embedding extra information or content into an element of reduced size. The Kendo UI Tooltip displays a popup hint for a given HTML element with a callout. It can be used to show plain text, HTML, and even dynamic content that is loaded in when the tooltip is opened.

Many widgets, such as the Kendo UI Slider, already have a tooltip built in.  We've now promoted that Tooltip to a proper widget that you can use on your own elements.

Creating a basic Kendo UI Tooltip is easy. Simply select which item you want to have a tooltip, and then set the content you want displayed.

A Simple Tooltip

<img id="clippy" src="img/clippy.jpeg">
<script>
  $("#clippy").kendoTooltip({ content: "Have You Missed Me?!?" });
</script>

Interactive Sample

By default, the Tooltip shows at the bottom. You can move it around by specifying its position property. Your choices are...

  • top
  • bottom
  • left
  • right
  • center

Positioning the Tooltip

<img id="clippy" src="http://www.kendoui.com/images/default-source/blog-images/clippyECE846D0DF26.png">
<script>
  (function ($) {

    $("#clippy").kendoTooltip({ 
      content: "Do you miss me?!?",
      position: "top"  
    });

  }(jQuery));
</script>

Note that in my tests, the tooltip won't show up at the top if there isn't room for it. It's smart enough to know that and stay where it is at the bottom.

You can also show HTML in the tooltip. You can inline the HTML in the content attribute, but it's cleaner to use a template.

Using A Template To Display HTML In A Tooltip

<img id="clippy" src="http://www.kendoui.com/images/default-source/blog-images/clippyECE846D0DF26.png">
<script type="text/x-kendo-template" id="template">
  <h3>Hey, it's my dog!</h3>
  <img src="http://www.kendoui.com/images/default-source/blog-images/dog.jpg">
</script>
<script>

  (function ($) {
    // render the template as the content of the tooltip    
    $("#clippy").kendoTooltip({ 
      content: kendo.template($("#template").html()),
      position: "right"  
    });

  }(jQuery));

</script>

Interactive Sample

Animation

You can also control the animation of the tooltip. You can specify what animation to use and how long the animation will last. These settings can be applied to both the opening and closing animation.

Tooltip Animation

<img id="clippy" src="http://www.kendoui.com/images/default-source/blog-images/clippyECE846D0DF26.png">
<script type="text/x-kendo-template" id="template">
  <h3>Hey, it's my dog!</h3>
  <img src="http://www.kendoui.com/images/default-source/blog-images/dog.jpg">
</script>
<script>

  (function ($) {

    $("#clippy").kendoTooltip({ 
      content: kendo.template($("#template").html()),
      position: "right",
      // make the tooltip slide out from clippy
      animation: {
        open: {
          effects: "slideIn:right",
          duration: 200
        },
        // passing reverse: true reverses the slideIn animation
        close: {
          effects: "slideIn:right",
          reverse: true,
          duration: 200
        }
      }
    });

  }(jQuery));

</script>

Interactive Sample

Remember that when you are using animations, refer to the fx documentation. Passing a reverse: true will reverse a desired animation. Not all of the animations are applicable to the tooltip. It currently supports fade, slideIn, expand and zoom.

You can also chain animations for neat effects. I have put together a list with each of these effects in action. The last one is a combo effect for a really neat animation that sort of sweeps in on a curve.


Loading External Content

You can load in external content to the tooltip simply by passing a url property to the content object.

External Content

<img id="clippy" src="http://www.kendoui.com/images/default-source/blog-images/clippyECE846D0DF26.png">
<script>

  (function ($) {

    $("#clippy").kendoTooltip({ 
      content: {
        url: "http://en.wikipedia.org/wiki/Office_Assistant"
      },
      width: 500,
      height: 500,
      position: "right"
    });

  }(jQuery));

</script>

Interactive Sample

When you do load external content, you might want to do it dynamically depending on what element you just moused over. In that case, the requestStart function is provided where you can send in some parameters to pass with the AJAX request.

Lots Of Tooltips

Creating a Tooltip for a single item is easy, but what about when you have an entire list of items that you want to display tooltips for? The Kendo UI Tooltip has a special property called filter that handles just this scenario.

You initialize the tooltip on the container of items, then specify via the filter which items you want to apply the tooltip to.

The template that you use for each item has access to the target object. This object is a jQuery object that represents the item that activated the tooltip. That means that if you want to pass any information to your tooltip, it's easy to do it by adding a data attribute that you can then read off of the target.

That's a little confusing, so as Linus Torvalds says, "Talk is cheap. Show me the code."

A Kendo UI ListView With Tooltips

In this demo, I've hooked into the Google YouTube API (as I've done before), and I'm returning results for a search term. When you mouse over the smallish film icon, you get the counts for likes, comments and views.


I store this information in data- attributes on the img element, which is where the tooltip is attached.

List Item Template

<img height="90" width="120" src="#: thumbnail.sqDefault #" alt="thumbnail" data-comments="#: commentCount #" data-likes="#: likeCount #" data-views="#: viewCount #"/>    

When the tooltip is activated, the target object is accessible and is the element that triggered the tooltip wrapped as a jQuery object.

Tooltip Template

<script type="text/x-kendo-template" id="tooltip-template">
  <div style="text-align: left">
    <p>
      <i class="icon-thumbs-up"></i>: #: target.data("likes") #
    </p>
    <p>
      <i class="icon-comment"></i>: #: target.data("comments") #
    </p>
    <p>
      <i class="icon-eye-open"></i>: #: target.data("views") #
    </p>
  </div>
</script>

An important thing to note here is that the items in the ListView are loaded via AJAX. This means that at the time the Tooltip is attached, they DO NOT YET EXIST. This is great because it means that you can add items to a list, and if there is a tooltip attached to the container, the items will automatically get the tooltip. That's called delegation and it rocks socks.

Get More Tooltip

If you've had enough of Clippy, but you still can't get enough Tooltip, make sure to grab Kendo UI, then head over to the Tooltip Demos and of course the Tooltip Docs.  This is a widget that I am very excited about as Tooltips are such a quick and easy way to get some extra info to your user without having to occupy precious space in your UI.

About the Author
Burke Holland is a web developer living in Nashville, TN. He enjoys working with and meeting developers who are building mobile apps with jQuery / HTML5 and loves to hack on social APIs. Burke works for Telerik as a Developer Evangelist focusing on Kendo UI. Burke is @burkeholland on Twitter.

Hijacking .toJSON For Fun And Profit

In my previous post on JSON, I covered the basics of what a JSON document is, and showed a few very simple things that can be done to produce them. In this post, I want to expand on one of those methods, toJSON, and show how to produce a custom JSON document for any JavaScript object, no matter what framework is serializing the object.

toJSON

As a quick review of toJSON, open your favorite JavaScript console and type (copy & paste) this:

toJSON Console Example

var obj = { foo: "bar", baz: "quux", toJSON: function(){ return {a: "b"}}};
JSON.stringify(obj);

The result is an object that has "foo" and "baz" attributes, but produces a JSON document of {"a": "b"}.

The toJSON method is part of the JSON specification. It provides an opportunity to override the default serialization process and return the JavaScript object that should be serialized instead of the original object.

This might not seem terribly useful off-hand, but it does have some real value in application development.

A Registration Form

For this example, I'm going to borrow code from the registration form example for the MVVM framework.

The example shown on that demo page doesn't submit anything to a server API. It is fairly simple to add this capability, though. I can set up a DataSource to create a record on the server and then use the "register" button click to save the record.

A Registration Form

// define a DataSource to handle creating a registration entry
var ds = new kendo.data.DataSource({
  transport: {
    create: {
      url: "/api/registration",
      dataType: "json",
      type: "POST"
    },
    // post the data as JSON instead of raw form post
    parameterMap: function(options, operation){
      return kendo.stringify(options);
    }
  },
  autoSync: false,
  schema: {
    model: {
      id: "RegistrationID"
    }
  }
});

// Set up the view model to run the form
var viewModel = kendo.observable({
  firstName: "John",
  lastName: "Doe",
  genders: ["Male", "Female"],
  gender: "Male",
  agreed: false,
  confirmed: false,

  register: function(e) {
    e.preventDefault();

    // when we click the "register" button, sync
    // the registration back to the server
    ds.sync();
    
    this.set("confirmed", true);
  },

  startOver: function() {
    this.set("confirmed", false);
    this.set("agreed", false);
    this.set("gender", "Male");
    this.set("firstName", "John");
    this.set("lastName", "Doe");
  }
});

ds.add(viewModel);

kendo.bind($("form"), viewModel);

When I run this and click the "Register" button, it will attempt to POST this JSON document to my server:

toJSON Output Of The Registration

{
  "firstName":"John",
  "lastName":"Doe",
  "genders":["Male","Female"],
  "gender":"Male",
  "agreed":true,
  "confirmed":false,
  "dirty":false,
  "id":""
}

This document contains all of the information that I need from the registration form, but it also contains information that my API does not need. For example, there is no need to send back a list of "genders", or the "dirty" flag. My API does not use these fields, and depending on the server I'm using, this can cause problems.

Hijacking .toJSON

To filter out the unwanted data and prevent the server from receiving more information than it needs, I can override the .toJSON method on my Observable object.

Overriding toJSON

// Set up the view model to run the form
var viewModel = kendo.observable({

  // ... existing code goes here

  // hijack the toJSON method and overwrite the
  // data that is sent back to the server
  toJSON: function(){
    return {
      a: "B",
      c: "d",
      foo: "BAR"
    }
  }
});

Now when I click the "Register" button, my server is sent the following JSON document:

{"a":"B","c":"d","foo":"BAR"}

Of course, sending back junk data isn't exactly what I want. A better idea would be to have the toJSON method serialize all of the data that I need to send back, and ignore the extra information.

A Better toJSON Method

// Set up the view model to run the form
var viewModel = kendo.observable({

  // ... existing code goes here

  // hijack the toJSON method and overwrite the
  // data that is sent back to the server
  toJSON: function(){
    return {
      firstName: this.firstName,
      lastName: this.lastName,
      gender: this.gender,
      agreed: this.agreed
    };
  }

});

When I click the "Register" button, now, I see this sent to the server:

The Right JSON Document

{
  "firstName":"John",
  "lastName":"Doe",
  "gender":"Male",
  "agreed":true
}

This version of the JSON document only supplies the values that my API needs, and nothing else.

Additive vs subtractive

Supplying a custom .toJSON method is definitely useful, but it can also be rather tedious. If I have a very large form - 30 or 40 fields for example - then it would be very time consuming and require a lot of code and maintenance to write the custom method the way that I've shown above. There is an alternative, though, which will make my life much easier in some cases.

The above example is an additive version of a toJSON method. Every time I need a new field in the JSON document, I have to add it to the method manually. The alternative is a subtractive toJSON method: I only remove the fields that are not needed and allow all other fields to pass through.

Removing Fields

// Set up the view model to run the form
var viewModel = kendo.observable({

  // ... existing code goes here

  // hijack the toJSON method and overwrite the
  // data that is sent back to the server
  toJSON: function(){

    // call the original toJSON method from the observable prototype
    var json = kendo.data.ObservableObject.prototype.toJSON.call(this);

    // remove the fields that do not need to be sent back
    delete json.genders;
    delete json.confirmed;
    delete json.dirty;
    delete json.id;

    // hand back the remaining fields for serialization
    return json;
  }
});

I'm doing two very different things in this case. First, I'm using JavaScript's prototypes to call the original version of the toJSON method.

If you're coming from another object-oriented language like C#, Java, or Ruby, you can think of this line:

var json = kendo.data.ObservableObject.prototype.toJSON.call(this);

as the equivalent of a call to super or base. It reaches back to the original method of the defining type, and calls it in the context of the current object instance.

If this were C#, it would look like this:

var json = base.toJSON();

Although some browsers provide a short syntax to reach an object's base methods, closer to what C# provides, the prototype code above is compatible across all browsers and is recommended at this point in time.
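To see the pattern outside of Kendo UI, here is a sketch with a made-up `Widget` constructor. The instance-level `toJSON` shadows the prototype's version, and calling the prototype's method with `.call(this)` is the "super" call:

```javascript
function Widget() {
  this.name = "grid";
}

// "base class" serialization, defined on the prototype
Widget.prototype.toJSON = function () {
  return { name: this.name };
};

var instance = new Widget();

// override on the instance, shadowing the prototype's method
instance.toJSON = function () {
  // the JavaScript equivalent of super.toJSON() / base.toJSON():
  // reach back to the prototype and call it on this instance
  var json = Widget.prototype.toJSON.call(this);
  json.extra = true;
  return json;
};

console.log(JSON.stringify(instance)); // {"name":"grid","extra":true}
```

The override gets the prototype's output as a starting point and then layers its own changes on top, which is exactly what the subtractive toJSON method above does with Kendo UI's ObservableObject.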

The second difference is that instead of adding fields to the object, I'm removing them. The JavaScript delete keyword will remove a specified attribute from a specified object. Note that this is not the same as deleting a record from a database. The delete keyword does not cause any network calls or other code to be executed. It only removes an attribute from an object.

Since delete is a keyword in JavaScript, most frameworks opt for the name destroy when creating a method that will remove an object from a data store.
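A quick sketch with a hypothetical plain object shows the effect: delete detaches the attribute from the in-memory object, and nothing more.

```javascript
var json = { firstName: "John", dirty: true, id: 7 };

// removes the attributes from the object itself;
// no network call is made and no data store is touched
delete json.dirty;
delete json.id;

console.log(Object.keys(json)); // only "firstName" remains
```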

By using a subtractive form of a toJSON method, I can potentially reduce the number of fields that need to be managed. I'm not limited to either additive or subtractive implementations, though. These can be combined into a more complex and robust implementation that both adds fields and removes them as needed.

One For All, All For One

Overriding a .toJSON method is an easy way to ensure the server API is only getting the data it needs. It provides a simple entry point for any framework to serialize a JavaScript object into a JSON document. And while the samples that I've shown in this blog post are centered around Kendo UI's MVVM framework, this is a standard that all modern browsers and frameworks implement and know how to work with. Having the .toJSON method in your tool-belt will allow you to customize nearly any object for nearly any modern JavaScript framework, including jQuery, Knockout, Backbone, Ember, Angular and more.

About the Author
Derick Bailey is a Developer Advocate for Kendo UI, a developer, speaker, trainer, screen-caster and much more. He's been slinging code since the late 80’s and doing it professionally since the mid 90's. These days, Derick spends his time primarily writing JavaScript with back-end languages of all types, including Ruby, NodeJS, .NET and more. Derick blogs at DerickBailey.LosTechies.com, produces screencasts at WatchMeCode.net, tweets as @derickbailey and provides support and assistance for JavaScript, BackboneJS, MarionetteJS and much more around the web.

A Facebook Style MultiSelect

This latest release of Kendo UI includes another very common UI component made famous by Facebook: the MultiSelect. At its most basic level, the MultiSelect is an enhancement of the standard HTML <select> element. It allows the user to pick multiple items, each of which is represented by an autonomous element within the select.

The KendoUI-Backbone Integration Project And Repository

Derick Bailey is working on better support for Kendo UI and Backbone.js integration, and wants you to help! Come see how to get started, what the integration project is currently able to do, and where it might go from here.

Styling Forms Like A Pro With Kendo UI

If you are like me, then nothing makes you feel like more of a failure in life than CSS. There was a GIF being circulated a few weeks ago that sums up how I feel about CSS so poetically. I'm not saying that there is anything wrong with CSS as a specification. There are many people who can do unbelievable things with it. For me, it's just never clicked. I've been writing CSS for a long time and studying it for a long time as well. It just doesn't seem to be getting any better. This seems to be especially true with forms. As a web developer, I tend to create a lot of forms and it's always a struggle.

A Day At The SPA With kendo.View

Oh, the luxuries of life at the spa. Sitting in a quiet room with low light, gentle music, and being pampered by attendants. It's the spa life for me.

Sublime Studio: Replicating Sublime Text 2 Features in Visual Studio

Being comfortable in your IDE is paramount to being a productive developer. I made the switch from Sublime Text 2 to Visual Studio. I had to change a few things in VS to get it to act like Sublime Text 2, but I pulled it off. I call it Sublime Studio.

Announcing the Kendo UI Q2 2013 Roadmap!

It’s officially springtime, ladies and gentlemen, and you know what that means: lots of sunshine, the world is green again, and the Kendo UI team has emerged from our secret, maximum security lair to share another roadmap update!

HTML5 and Test Studio

Rich user experiences with HTML5 applications can cause some difficulties for teams trying to get solid user interface (UI) testing around their apps. Interacting with video or audio elements is often troublesome, and the manipulations of a web page’s document object model (DOM) on the fly via asynchronous alterations causes automation frameworks (and automaters!) endless grief in the attempt to get subtle timing issues stabilized.

Making Kendo UI Binders for Complex Types

Custom MVVM binders are a powerful part of the Kendo UI MVVM framework. Making basic custom binders isn’t too difficult.