DailyJS: What makes animation exciting in 2015—compared to the older Web 2.0 animation libraries?
Julian: We have faster browsers today, and they're only continuing to get faster at an impressive rate. The Chrome team, in particular, also happens to be aggressively pushing to standardize browser behavior on Android. Those two factors, combined with the increasing computing power of mobile devices, mean that Web animation performance will continue to improve -- even if developers' implementations stay stuck in old ways.
The other predominant trend in web development is simply broader awareness of performance, along with the increased importance being placed on it. Fortunately, performance is no longer a second-class citizen in web development education and industry news outlets. With broader awareness comes broader implementation and deeper understanding of best practices. Since there is a lot of low-hanging fruit to be picked in web performance, this too will result in significant gains in site performance and usability in the coming months.
All of this leads to a performant base web development platform upon which beautiful and fluid animations can be layered. The higher we can push animations to perform -- across all browsers and devices -- the sooner the slick motion design commonly found in mobile apps will exist without compromise on the Web.
DailyJS: What trends will we see in the future of Web animation?
Julian: We might see a wave of too-clever and overused motion design in web user interfaces. Are you familiar with those awesome-yet-crazy After Effects-produced motion design examples on Dribbble? Those will become a technical reality on web pages soon enough. Like all things, however, I think the wave will peak then collapse, and we'll settle into a happy medium of quality user experiences bolstered by motion design that doesn't detract from usability.
In terms of trends, the inevitable growth of React Native might mean we'll also see a large shift in the tooling landscape toward the React ecosystem. I wouldn't be shocked if React is the future of web programming as we know it. If this turns out to be the case, animation engines will have to follow suit or they'll become irrelevant.
DailyJS: What animation anti-patterns should developers and designers be aware of?
Julian: Utility and elegance are the goals of all great Web animation. Great animators deliberately choose every aspect (e.g. easing, target properties, duration, and so on) of their seemingly arbitrary animation sequences -- because they want to reinforce the intentions of a UI, and that's a very calculated decision. Whimsical motion design, in contrast, is not only inconsistent, but it also appears inelegant and distracting to the user.
The sad truth is that -- while there are hundreds of tutorials on the minutiae of UI design -- there is very little education on motion design. They're both very important aspects: Whereas UI design lays the structural foundation for interacting with a web page, motion design enriches that foundation with the furnishing and decoration that make the page usable and comfortable. If you allow me to be metaphorical for a moment, furnishing is the utility that motion design serves, and decoration is the elegance it provides. Great apps leverage both utility and elegance to make the user feel like they're interacting with an interface that's living, breathing, and tangible. In contrast, an interface that's devoid of motion design reminds the user that she's simply dragging a cursor across a screen or tapping her finger on a piece of glass. In other words, a web page without motion design can make the user painfully aware of the artifice before her.
With that context out of the way, common anti-patterns include:
Employing one-off motion design interactions that break from convention: The less you copy motion design from existing popular apps and sites, the less familiar your interface will feel to the user, and the less confidence she'll have when using it. While there's utility in novelty, the motion design of everyday UI elements shouldn't be novel. It should be reliably obvious.
Allowing complex animation sequences to consume a large total duration: Developers often make the mistake of letting animations run too long, causing the user to wait needlessly. UI flourishes should never slow down the apparent speed of a page. If you have a lot of content fading into view within a larger animation sequence, ensure that both the individual animation steps and the total animation sequence duration are kept short. Don't lose track of the bigger UX picture. Since it's difficult to judge the appropriateness of your animation durations after seeing them play out dozens of times during development, a good rule of thumb is to speed up all animations by 25 percent before you push a site to production.
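One way to act on that 25 percent rule of thumb is to route every duration through a single scaling helper, so the whole UI can be sped up in one place before release. This is a minimal sketch; the helper and factor names are my own, not from any particular library:

```javascript
// Hypothetical helper: a single global factor scales every animation
// duration, so the pre-launch 25% speed-up is a one-line change.
var SPEED_FACTOR = 0.75; // 1.0 during development, 0.75 for production

function duration(ms) {
  return Math.round(ms * SPEED_FACTOR);
}

// With any API that accepts a millisecond duration, e.g.:
//   element.animate(keyframes, { duration: duration(400) }); // 300ms
```

The payoff is that no individual animation needs retuning; the whole sequence compresses uniformly.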
Being frivolous: Interface developers should regularly be downloading the most popular apps, playing with them extensively, and judging whether they feature animation to a greater or lesser extent than their own work does. If they feel that the apps use animation to a much lesser extent, they need to consider toning back the motion design in their UI. Simply, if a particular piece of motion design doesn't provide psychological utility to the overall experience (affordance, clarity, connection, etc.), then its inclusion should be reconsidered. Injecting motion design for its own sake leads to bloated interfaces.
Not experimenting: Finding the right duration, stagger, easing, and property combinations for each animation is not a skill that designers are simply born with. It's a skill that every great developer-designer has had to hone. So, remember: your first attempt at a combination of animation properties might look good, but it's probably not the best combination. You should experiment by systematically changing each factor in the motion design equation until you stumble into something sublime. Once you've found a combination you love, experiment even further. Consider cutting the duration in half, switching to a completely different easing type, or swapping out a property. Designers are often averse to repeated experimentation because they believe in the sanctity of their whimsically derived creative nuggets of insight. But insights have relative value, so it's important to get outside of your comfort zone to see what else you can come up with. Professional developers should strive for greatness, not goodness. Greatness always entails experimentation.
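One cheap way to make that systematic experimentation concrete is to generate the full grid of candidate combinations mechanically and audition each entry in turn. The durations and easing names below are arbitrary examples, not recommendations:

```javascript
// Build every duration/easing pairing so each candidate can be
// auditioned one-by-one, instead of settling on the first that looks OK.
var durations = [200, 300, 450];                     // milliseconds
var easings = ["ease-out", "ease-in-out", "spring"];

var combos = [];
durations.forEach(function (d) {
  easings.forEach(function (e) {
    combos.push({ duration: d, easing: e });
  });
});

// combos now holds 9 candidates to preview in sequence
```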
I've written a lot about these theoretical aspects of motion design in my book, which also dives into the topics of animation performance and animation workflow.
Both Hapi and Express rate extremely well against our judging criteria. To choose between the two, it pretty much came down to the framework architecture: Hapi's plugin system means that we can isolate different facets and services of the application in ways that would allow for microservices in the future. Express, on the other hand, requires a bit more configuration to get the same functionality (it's certainly capable!).
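The architectural appeal can be sketched without Hapi itself. The toy registry below only mimics the shape of a plugin system -- each facet registers its routes against a shared server object, staying isolated enough to be split out later. None of these names are Hapi's real API:

```javascript
// Toy plugin registry illustrating facet isolation (not Hapi's API).
function Server() {
  this.routes = {};
}
Server.prototype.route = function (path, handler) {
  this.routes[path] = handler;
};
Server.prototype.register = function (plugin) {
  plugin.register(this); // the plugin only sees the server interface
};

// A self-contained "auth" facet that could later become its own service:
var authPlugin = {
  register: function (server) {
    server.route("/login", function () {
      return "login page";
    });
  }
};

var server = new Server();
server.register(authPlugin);
```

Because each facet talks only to the registration interface, pulling one out into a separate process later touches no other facet's code.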
The current npm-www has a lot of dependencies that you might have seen before: browserify, uglifyjs, moment, ejs, and nodemailer are all popular modules. I think using something like hapi.js or Express makes sense, even if it just gives the project some architectural hints.
While, yes, we could use barebones Node.js and roll our own framework, we want to avoid the same "special snowflake" situation that we're currently in. Plus, by using a framework, we can focus more on pushing out the features our community wants and needs, instead of debugging some weird nook and/or cranny that someone forgot about.
Notes from the Road
Notes from the Road is a post on Node's official blog by TJ Fontaine about the Node on the Road events:
These Node on the Road events are successful because of the incredible support from the community and the existing meetup organizations in their respective cities. But the biggest advantage is that the project gets to solicit feedback directly from our users about what is and isn't working for them in Node.js, what modules they're using, and where they need Node to do better.
From these experiences TJ has written up some notes about Node's release schedule, future documentation improvements, and the path to becoming a Node contributor:
In an effort to make it easier for users to contribute to Node.js the project has decided to lift the requirement of signing the CLA before contributions are eligible for integration. Having to sign the CLA could at times be a stumbling block for a contribution. It could involve a long conversation with your legal department to ultimately contribute typo corrections.
TJ Fontaine Interview
Meanwhile, the Node hosting company Modulus has interviewed TJ Fontaine, where some of these points are reiterated:
If you're looking for ways to contribute to Node itself, the website will soon be going through an overhaul to improve our documentation. We're going to be adding a lot more documentation, cleaning up what we already have, as well as designing the pieces to help internationalize the site. Node's community is globally diverse; we should be working to enable Node users everywhere they are.
This interview was conducted by [Oleg Podsechin](http://twitter.com/olegpodsechin) with [Ryan Dahl](http://tinyclouds.org/) on the 8th of July, shortly after Ryan's […] an IT consultancy.
OP: So on the topic of CommonJS, are you following any of the APIs or
any of the discussions on the list?
RD: Yeah, sure
OP: And which ones are you most interested in?
RD: CommonJS has some good specs and some less good specs. Some
specs are rather prescriptive without any implementation - which I find
[…] interface - I just think it’s going to take some time to experiment with
[…] currently lacks a way of dealing with raw binary in any reasonable way.
The module spec is good, the assert spec is good, the others are […]
OP: What about the package spec?
RD: Oh yeah, it also looks good. I don’t want to work on a package
system, so I’m not following it super closely, but I think that there’s
a lot of good ideas there.
OP: As a user you must use a package management system. Which ones do you use?
RD: I’m playing around with NPM. It’s OK, kind of buggy, but you can […]
OP: So with regards to packages, obviously there’s some stuff that’s
going into the core of Node, but external packages, like XML parsing,
are there any packages that you think are important that aren’t there yet?
RD: There needs to be a better MySQL solution, libmysql_client, the
library that comes with MySQL is blocking so that is not a solution.
There are other solutions, but they seem kind of buggy. A lot of people
use MySQL and it would be a hindrance for them if they couldn’t access
that easily. That’s one.
[…] it seems that has been solved. I also wanted a DOM implementation and it
seems like that’s been solved too. I would really like a way to access
Cassandra, which uses Thrift - that’s not been done yet.
RD: Thrift is a piece of crap but unfortunately some projects are
using it so we’ve got to interface with it. Some sort of Thrift binding
would be good.
OP: I think in the next release they’re looking to have a RESTful interface.
RD: I’ve heard they’re introducing an interface based on Avro, a new
message serialization RPC thing, but I’m not sure how good the Avro
support is. Avro seems a lot better than Thrift so just binding to Avro
would be the best way to go for talking to Cassandra - I don’t know.
OP: Being able to connect to databases is important for users. If it’s not
there, then it’s a total roadblock for a lot of people. So, MySQL is a
big one. […] is non relational databases and Node seems like the perfect glue, if you
will, to connect these different data stores together. CouchDB guys are
using it for that purpose. What are your thoughts, can you see an
opportunity there?
RD: Exactly, Node perfectly fills the proxy and authentication
layer, between the storage backend and client. So yeah, I think it’s a
good sort of glue and I agree with the CouchDB philosophy that the bulk
of the application can kind of sit in the database. All the hard stuff
can be back in Couch and Node can just proxy data back and forth.
OP: So talking about the packages you’d like in Node, moving into the
core and looking at the way the project is being built, what are your
thoughts on project leadership in open source projects? What do you
think is the right way to do it, which things shouldn’t you do? What’s
your personal approach? Because you used to post little challenges for
people to get them excited and get them contributing a little bit. Can
you talk more about that?
RD: I have a strong arm in the project. I’m the only committer and I
dictate how things go and I think that’s a good approach for Node at
this stage. At some point, hopefully, Node will grow up and we’ll have a
committee that decides on things. But at the moment having somebody
that’s dedicated to the project and who will make sure that any changes
that go in will be maintained is important. Part of that role is not
accepting changes that I can’t maintain myself, and so it means
rejecting a lot of good code - just because it doesn’t fit into my
constrained idea of what “Node core” is. There are users who would have
contributed, for example, package manager code, but it’s not something I
have time to maintain.
Another part of leading this project is getting people involved by very
explicitly suggesting to people what needs to be done. I’m sending a lot
of emails to people saying “hey, you should give me a patch on this
thing, that would be very helpful”.
OP: Nudging them a little bit in the right direction ...
OP: So how big of a role has GitHub played in this? And git and the
social coding element of it?
RD: GitHub is great - its best feature is the ability to have web
links to source code - at a particular commit, with a specific section
highlighted. Linking to source code like that really improves
communication in email and on IRC. That’s probably the best feature of
GitHub.
OP: Issue tracker?
RD: I use the issue tracker, which is OK, but it could be better.
Generally, GitHub could be doing more by hosting mailing lists. Google […]
OP: Moving on, with regards to commitment to the project, you’re saying
that you’re fully behind it and so on and so forth, so you’re
currently employed by Joyent and working 100% on Node?
RD: Yeah - Node and projects based on Node. It’s great.
OP: I guess the question is more about the commercial nature of Node and
commercialization of Node. Clearly Joyent have an interest in it, being
a hosting company, but do you see an ecosystem of businesses emerging
around Node at some point and if so what types of businesses are these
likely to be?
RD: One obvious thing is hosting of applications in a simple way
like Heroku is doing. Node opens the door to independent contractors
making little real-time websites for people -- so there’s that
OP: You don’t have an interest in building a service on top of Node?
Rather you wish to maintain the core project?
RD: I work for Joyent, so I work on products for them, but my main
interest is making Node perform well and making users happy.
OP: So the last couple of questions are a bit more abstract. The first
one is about the asynchronous nature of Node. Do you see event driven
webapps becoming more prevalent in the future? Not only Node, but
asynchronous webapps in general.
RD: Yeah, definitely. Not waiting for a database is a big win in
terms of performance - the amount of baggage associated with each TCP
stream is just much smaller. We need that for real-time applications
where many mostly-idle connections are being held. But even for normal
request response websites I think we’ll see more asynchronous setups
just because of the performance wins - even if it isn’t necessary. It’s
clear that asynchronous servers perform better in almost every way, it’s
difficult to ignore that.
OP: […] for doing such stuff than other languages?
RD: There are of course efficient green thread and coroutine
implementations which allow you to write asynchronous code in a
synchronous looking way - Eventlet for example. I’m not convinced that’s
the right approach, I think it’s a leaky abstraction. There’s no
abstraction with callbacks - it’s a rather direct translation from the
interface the operating system gives.
OP: […] CoffeeScript, they do callbacks and deferred and stuff like that.
RD: CoffeeScript is cute. I’m not convinced by CoffeeScript’s
deferred thing. I haven’t used it but it seems maybe that it will
confuse the users.
RD: […] there are a lot of things wrong with it, but it’s an important language
and it’s set in stone by its ties to the browser.
OP: So all these tools, like debugging ...
RD: CoffeeScript is beautiful but it makes programming more
difficult. If there was more toolage around CoffeeScript, like a
debugger which translated line numbers from compiled code to CoffeeScript
lines, it would be interesting. For myself, Node is already buggy enough,
another layer hurts rather than helps. The deferred concept is
interesting, basically when you put in a deferred keyword before a
function call, the rest of the current callback is put into a callback
as the last parameter to the deferred call. Wonder how that’s going to
work out - it seems too simplistic. It’s kind of cute that you still
have the same programming model. I mean, it’s not the same as what’s
happening for coroutines or green threads, there’s still only one
execution stack. Who knows, maybe CoffeeScript’s deferred keyword will
end up working out well, I’m skeptical though.
OP: So, last question. Which is sort of two questions rolled into one
really. At the last talk you gave the other day you mentioned that your
view of a program is that it’s a set of inputs of data from various
sources, somehow transforms that data and forwards it on. Can you
elaborate on that a bit?
RD: Yeah, I think most of the programs, or a large part of the
programs that we write, are just proxies of some form or another. We
proxy data from a database to a web browser, but maybe run it through a
template first and put some HTML around or do some sort of logic with
it. But largely, we’re just passing data from one place to the other.
It’s important that Node is setup to pass data from one place to the
other efficiently and with proper data throttling. So that when data is
coming in too quickly from the database, that you can stop that incoming
flow. Suppose it’s over a TCP connection, you can just stop reading from
that data source and not fill up your memory with the whole response.
Start sending out the first part of the template that you’re sending to
the web browser and then pull in more data from the DB. You know, it
must properly shuffle the data through the process without blowing up
the memory if one side is slower than the other. You shouldn’t have to
pull down the entire table, put it into a template and then send it out. It
should just be able to flow through your system, so creating an
environment where it’s easy to setup these flows in the proper way is
important. We’re not there yet, but that’s kind of my vision of what
Node will be. Lots of shuffling of data from one file descriptor to the
next, without having to buffer a ton of data.
OP: So in a way you can look at different Node instances talking to each
other, forming a graph with directed edges between the different nodes?
Is that where the name Node comes from? How did you come up with the name?
RD: I used the name “Node” because I envision it as one part of a
larger program. A program is not a process, a program is a database plus
an application plus a load balancer and Node is one node of that. It’s
not necessarily a bunch of Node.js instances but a couple of Node.js
instances plus some other things.
OP: Sounds good! Thank you for taking the time to chat.
This interview was conducted by [Oleg Podsechin](http://twitter.com/olegpodsechin) with [Ryan Dahl](http://tinyclouds.org/) on the 8th of July, shortly after Ryan's […] an IT consultancy.
OP: The first question is an introduction really. How did you arrive at Node?
RD: I was a contractor and I was doing various little C projects
usually involving server and event driven software and I realized that I
was doing this same code over and over. C is a nice language to work in,
but I wanted something I could script in the same way that I was
programming these servers.
RD: A little. I used to work a lot with Ruby on Rails - so I’d often
be dealing with front-end code. Back then I wrote a little Ruby web
server called Ebb that was meant to be a faster Mongrel. That code was
the starting point for Node.
OP: Ebb was mostly in C right? So you went from writing it in Ruby, then
writing it in C and now you’re sort of ending up writing it in JavaScript?
RD: Right. So what originally was Ruby turned into C. For a while I
toyed with the idea of having a small web server C library - but it’s
[…] is exactly the language that I’m looking for here.” That happened
shortly after V8 was released.
OP: You’ve said that there are two languages that will always be around:
[…] purpose programming language?
RD: […] different than other dynamic languages, namely that it has no concept of
threads. Its model of concurrency is completely based around events.
This makes it rather different than other general purpose dynamic
programming languages like Ruby and Python. At least for certain classes
[…] example when writing an IRC server.
[…] becoming increasingly more prevalent, not only on servers but also as a
desktop application language. […] encourages people to dump everything into
global variables. That’s a […] overcome that sort of thing.
OP: So, did you follow the whole discussion around EcmaScript 4 and EcmaScript 5?
RD: I like Crockford’s opinion that the language should be kept
[…] didn’t have many predefined ideas about how to do stuff - particularly
for I/O. Although EcmaScript 4 didn’t define any I/O, it did define a
lot. It did make a lot of breaking changes. That said, I wish EcmaScript
5 did have a few more features.
OP: Any particular ones in mind?
RD: What’s this called? Destructuring assignment? If you have an array
on the right and a list of variables on the left, they can be defined
that way. That would be nice to have.
OP: That’s included in Rhino, but not in V8.
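The feature being described here, destructuring assignment, was later standardized in ES2015 and now works in V8 as well; a one-liner shows the shape:

```javascript
// Pre-ES2015: unpack an array element by element.
var pair = [1, 2];
var first = pair[0], second = pair[1];

// ES2015 destructuring: an array on the right, variables on the left.
var [a, b] = [1, 2];
// a === 1, b === 2
```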
OP: So let’s move on to Node itself. What is the most difficult design
decision you made with regards to the project?
RD: Something that was very hard for me was ... my original idea was
that it was going to be a purely non-blocking system and I’ve backed out
of that a bit in the module system and a few other areas. In the browser
you don’t know when the scripts are completely evaluated until an onLoad
callback is made. Originally Node was similar. You would load a bunch of
module files and you wouldn’t know that they were fully interpreted, fully
evaluated, until a “loaded” event was emitted. This made things a bit
complicated. You couldn’t just do “require” and start using that stuff
right below it, you had to wait for the callback to do that.
OP: The hello world app would have one more indentation.
OP: But it’s funny because people say that one of the benefits is being
able to run the same validation logic on the server and browser, but the
CommonJS module spec doesn’t work within the browser, so there are these
efforts to try and make frameworks with asynchronous module loading.
RD: Right, so in terms of difficult design decisions, I wanted Node
to be browser-like. Maybe it didn’t use the same methods but the same
structures could be ported easily, aliasing methods to the browser ones.
Originally Node achieved that--it was totally browser-like. Originally,
it even had a ‘window’ object. I slowly backed off that API as it became
clear it wasn’t necessary to have the server-side environment be exactly
the same. So I went with the CommonJS module system which was rather
reasonable; the CommonJS people had put a lot of thought into it and I
didn’t really want to worry about modules so much. So yeah, require is
blocking and there are some other minor things that are blocking in
Node. Generally this pragmatic approach of being non-blocking 99% of the
time, but allowing a few synchronous operations here and there has
worked out well. It probably doesn’t matter for a server-side program if
you load modules synchronously.
Part 2 of this interview will be posted tomorrow (Wednesday 11th).