GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. – http://graphql.org/
This guide will walk through installing and creating a GraphQL API in a Ruby on Rails application. It is a companion piece to the excellent Getting Started with Rails guide from RailsGuides. There are lots of resources that teach GraphQL itself better than I can, so this guide focuses on actually installing and using it in a Rails app.
Let’s first verify that the schema of your app is what is expected for this guide. We will start off right where the RailsGuides guide leaves off. You should have articles and comments tables with the following schema:
Since GraphQL uses knowledge of relationships between types, we must first define the types we will allow to be queried: in this case, Article and Comment. Add the following article and comment type files to
Rails, by default, will protect your application from a particular type of security risk called a “cross-site request forgery”.
A side-effect of this is that you cannot POST to the graphql endpoint unless you temporarily disable this feature, by modifying your application_controller to read:
Please note that this is an important security feature, and should only be disabled during development, or if you want to create a completely public API endpoint.
Wrapping up & Next Steps
At this point we have installed GraphQL into our Rails app, created schemas for Articles and Comments, written query types to return that data from our application, and tested the real API endpoint using Postman.
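To exercise the endpoint outside Postman, a small JavaScript client can POST the same query. This is a hypothetical sketch: the `/graphql` path matches the route created above, but the `allArticles` and `comments` field names depend on the query type you defined in your own schema.

```javascript
// Hypothetical query; adjust field names to match your query type
const query = `
  {
    allArticles {
      title
      comments { body }
    }
  }
`;

// Build the POST request shape GraphQL servers expect:
// a JSON body with a "query" property
function buildGraphQLRequest(query) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  };
}

// In the browser (or Node with a fetch polyfill):
// fetch('/graphql', buildGraphQLRequest(query))
//   .then(res => res.json())
//   .then(json => console.log(json.data));
```

This is exactly the kind of request Postman sends; the forgery-protection change above is what allows it to succeed.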
From here you can take any path to consume the data, such as a React or Angular front-end application, or exposing the endpoint publicly!
Some next steps may be:
Investigate a front-end GraphQL client, like Apollo
Create a class factory which allows dynamic super classes to be applied to an ES6 class (dynamic inheritance). This will be transparent to a consumer, meaning they will interact with it in the same way as the initial class.
I was recently working on some code which extended an asynchronous module to add some additional functionality. The class initially looked like this:
import Fetcher from 'fetcher'
//do some extra work
However, testing a module like this is difficult for two reasons.
The async nature of the behavior is challenging to test and mock
The explicit super class Fetcher which is imported in the module is difficult to override without reaching down into the prototype and manually modifying methods during the test. Yuck!
Wrap the extending (child) class declaration with a function which accepts a parameter for its super class, with a default parameter value. The inner class can then be declared using the SuperClass parameter. This allows the child class to reference either its default parent, or a custom super class explicitly passed in.
The child class then extends or overwrites any methods from super that are required. Finally, a new’d up instance of the child class is returned with the remaining arguments passed to the class factory function.
For consistency of use, we can force the function to be called using new, making the abstraction completely transparent to any consumer.
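Putting those pieces together, here is a minimal sketch of the pattern. `Fetcher` is a stand-in for the super class imported from 'fetcher', and all names are illustrative:

```javascript
// Stand-in for the imported default super class
class Fetcher {
  fetch() { return 'real fetch'; }
}

// Class factory: the child class is declared inside a function that
// accepts its super class, defaulting to the real Fetcher
function createFetcherChild(SuperClass = Fetcher, ...args) {
  class FetcherChild extends SuperClass {
    fetch() {
      // extend or override behavior from the super class
      return `wrapped(${super.fetch()})`;
    }
  }
  // return a new'd up instance with the remaining arguments
  return new FetcherChild(...args);
}

// Default usage: behaves like the original class
const instance = createFetcherChild();
// instance.fetch() === 'wrapped(real fetch)'

// Test usage: inject a mock super class, no prototype surgery needed
class MockFetcher {
  fetch() { return 'mock fetch'; }
}
const testInstance = createFetcherChild(MockFetcher);
// testInstance.fetch() === 'wrapped(mock fetch)'
```

Because the factory returns the instance itself, calling it with `new` yields that same object, so consumers can keep using `new` exactly as they did with the initial class.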
If you’re looking for the steps to include static files, like an mp3, in a create-react-app project, and use ES6 import statements to load them in a component, here are the steps. This process will involve “ejecting” from create-react-app, so think about the implications of that before proceeding.
1) Eject from the create-react-app by running npm run eject. The create-react-app documentation describes the eject command
If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.
Instead, it will copy all the configuration files and the transitive dependencies (Webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.
You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.
2) Include the files you want to use somewhere in the src folder. Probably with the component which will be consuming them.
3) We need to modify our dev and prod webpack configs to allow the files to be imported through webpack and included in the build folder. Add an entry to the exclude list which will test for the file. The exclude list should look like this:
Add a loader to the list of loaders for the file type you want to use:
Do this for both the dev and prod webpack files in the config folder.
4) Within the component file, you can now import the file you wish to use, specifying the relative path to the file
import soundFile from './file.mp3';
5) Then you can use the file within your component. For the example of an mp3 file, you can create a new Audio element in the constructor, and play or pause it using some event handler and function
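Here is a sketch of that pattern. The toggle logic is pulled into a plain function so it can be exercised with a stub in place of a real HTMLAudioElement (which only exists in the browser); the component wiring in the comments is illustrative.

```javascript
// Play or pause an audio element depending on its current state
function togglePlayback(audio) {
  if (audio.paused) {
    audio.play();
    return 'played';
  }
  audio.pause();
  return 'paused';
}

// In the component (illustrative names):
//   constructor(props) {
//     super(props);
//     this.audio = new Audio(soundFile); // soundFile imported from './file.mp3'
//     this.handleClick = () => togglePlayback(this.audio);
//   }

// Stub demonstrating the toggle behavior outside a browser:
const stub = {
  paused: true,
  play() { this.paused = false; },
  pause() { this.paused = true; },
};
console.log(togglePlayback(stub)); // played
console.log(togglePlayback(stub)); // paused
```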
A primary goal of data visualization is to communicate information clearly and efficiently via statistical graphics, plots and information graphics. Numerical data may be encoded using dots, lines, or bars, to visually communicate a quantitative message. Effective visualization helps users analyze and reason about data and evidence. It makes complex data more accessible, understandable and usable.
TL;DR: Use data visualization to:
Communicate information clearly
Visually communicate a quantitative message
Help users analyze and reason about data and evidence
Make complex data more usable
Good and Bad Examples
A pie chart that shows the 100 most active “tweeters” of a particular hashtag? Terrible visualization. Measuring it against our criteria, it doesn’t effectively communicate a message, or enable any type of reasoning about the data, other than to reason that 100 is far too many data points for a pie chart.
“Almost never use a pie chart” – everyone’s stats teacher
A tree chart which shows the world’s defense budgets? Great visualization. It is clear and understandable, and illustrates a message to the viewer.
A map of the 2012 Electoral College results by state? I believe this is an excellent visualization. It shows not only each state’s votes, but also the weight each state has in the electoral college. It is accessible to the viewer.
A chart of the 100 most popular websites per month? At a quick glance, which is most popular? Is it the largest, or closest to the center? Which is the largest anyway? This is an ineffective visualization.
Technologies to visualize data on the web
There are three ways data can be presented/visualized on the web:
Images: The visualization is designed in a tool like Photoshop, Illustrator, or Tableau
CSS: CSS properties, such as width or height, determine the shape and size of objects which can represent data points.
There are many popular visualization and charting libraries. Three very popular and robust are:
Explanation (commented source)
//creates a `search` function, which accepts a needle (property name, string),
//a haystack (the object to search within), and found (the recursively
//built list of found properties)
//iterate through each property key in the object
//if the current key is the search term (needle),
//push its value to the found stack
//return the array of found values to the caller, which is
//either the caller of the search function, or the recursive
//"parent" of the current search function
//if the value of the current property key is an object,
//recursively search it for more matching properties
//this can be changed to an else if, if matched properties should not be searched recursively
//return the list of found values to the caller of the function
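Reconstructed from the comments above, a sketch of the search function might look like this:

```javascript
// Recursively search `haystack` for every property named `needle`,
// collecting the matching values into `found`
function search(needle, haystack, found = []) {
  // iterate through each property key in the object
  Object.keys(haystack).forEach(key => {
    // if the current key is the search term (needle),
    // push its value to the found stack
    if (key === needle) {
      found.push(haystack[key]);
    }
    // if the value of the current property key is an object,
    // recursively search it for more matching properties
    if (typeof haystack[key] === 'object' && haystack[key] !== null) {
      search(needle, haystack[key], found);
    }
  });
  // return the array of found values to the caller, which is either
  // the caller of the search function or the recursive "parent"
  return found;
}

const data = { id: 1, child: { id: 2, meta: { id: 3 } } };
console.log(search('id', data)); // [ 1, 2, 3 ]
```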
My organization has early access to Microsoft Planner through our Office365 account, and I’ve been able to evaluate it compared to what we currently use: Trello. Spoiler alert: This post is fairly critical of Planner. In a Planner vs Trello head-to-head, Trello wins hands down.
1. Broken Comments
The first thing I tried in Microsoft Planner was to leave a comment on a card. It broke. Bad first experience.
2. No Markdown support anywhere
Markdown syntax is not supported in comments, descriptions, or checklists.
3. Only 1 person can be assigned to a card
A key feature of Trello for my team is that multiple people can be assigned to a card at any given time. This fits our workflow. Forcing a 1-card-1-person relationship arbitrarily enforces a workflow on users.
4. Bugs in other browsers
Microsoft Planner doesn’t seem to play nice in Safari. It is compatible with IE/Edge… perhaps that is what it was developed for as well.
5. Cannot Edit Comments
Once comments are posted in Microsoft Planner, it is impossible to edit them.
6. No Card Numbers
There are no visible card numbers in planner. While they are also hidden in Trello, there are multiple plugins and options to surface them to the user.
7. No plugins or extensions
While this is technically not a failure of Microsoft Planner, community support does matter. Trello has thousands of community developed extensions that enhance and customize the interface to make it perfectly suited for a given team.
8. Character Limit in Checklists
Don’t plan on being verbose in your checklists in Microsoft Planner — there is a 100 character limit per item.
9. No 3rd Party Integrations
An important part of my Trello workflow is integration with Slack. Trello has hundreds of integrations with other services. Microsoft Planner is still new to the game, and there are no integrations yet.
10. Access tied only to Office365
I have personal and family Trello boards, as well as work related boards, all linked together under the same account. The only access point a user can have with Microsoft Planner is through your organization’s Office365, meaning that vendors or contractors who aren’t part of your organization are out of the loop.
11. Limited Labels
Microsoft Planner seems to have a hard limit of 6 labels for a board. This is a pretty small set, and totally unusable if — like my team — you support dozens of applications.
12. Moving Cards Between Boards
It is impossible in Microsoft Planner to move cards between boards or projects. This is an essential function of Trello which allows us to move items between different teams’ boards, if the ownership of projects changes, or cards are escalated to different groups.
13. Archiving Completed Tasks
In Microsoft Planner, “completed” cards are hidden, but still listed in each section. There is no “archive” section where you can move completed tasks to remove them from your board.
14. Board Customization
There doesn’t seem to be a way in Microsoft Planner to customize the board in any way, such as background colors or other branding.
15. Multiple Checklists
Trello allows you to have multiple checklists per card, breaking up tasks into groups. Microsoft Planner only has one checklist per card.
The basic gist is that a circle is drawn, then another circle is drawn on top of it, centered on a point on the prior circle’s circumference, with the number of degrees and the radius being incremented or decremented on each recursive cycle. The rate of change in degrees or radius is then modified and redrawn on a time interval to create an animated effect.
Each recursive circle is placed on a point on the prior circle’s circumference.
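A minimal sketch of that drawing logic using the canvas 2D API; the rate constants (0.9 for radius decay, 0.3 radians per step) are arbitrary choices for illustration.

```javascript
// Draw a circle, then recursively draw the next one centered on a point
// on this circle's circumference, shrinking the radius and rotating the
// angle on each cycle
function drawCircles(ctx, x, y, radius, angle, depth) {
  if (depth === 0 || radius < 1) return;

  ctx.beginPath();
  ctx.arc(x, y, radius, 0, 2 * Math.PI);
  ctx.stroke();

  // next center sits on this circle's circumference at `angle`
  const nextX = x + radius * Math.cos(angle);
  const nextY = y + radius * Math.sin(angle);

  drawCircles(ctx, nextX, nextY, radius * 0.9, angle + 0.3, depth - 1);
}

// Animate by shifting the starting angle on a time interval:
// const ctx = document.querySelector('canvas').getContext('2d');
// let start = 0;
// setInterval(() => {
//   ctx.clearRect(0, 0, 400, 400);
//   drawCircles(ctx, 200, 200, 80, start += 0.05, 40);
// }, 30);
```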
Make your webpage blazingly fast! This beginner’s guide includes 14 performance tips (from simple to complex) that will help you speed up any site.
Above is a video version of this guide. It goes in-depth into many of the topics, and includes a few more. The presentation was given to a group of web developers at The Church of Jesus Christ of Latter-day Saints, and is slightly tailored to some of their web properties. Enjoy!
This guide is loosely arranged in order by bang-to-buck ratio, so simpler optimizations (that generally have an immediate impact) are listed first.
1. Optimize images
It’s not uncommon for an image straight from Photoshop or a digital camera to weigh several megabytes. Large and unoptimized images can be the number one cause of slow-loading pages. Luckily, this may be the simplest optimization as well.
This vacation image (right from my digital camera) is 12.8mb. It is also 4000×6000, which is far too large for a webpage. Reducing the pixel size in Photoshop to 1000×667 takes the file size down to 480.7kb. This is still far too large. In the Photoshop Save For Web menu, we can usually reduce the jpeg quality to between 60% and 70% without noticing any artifacting or degradation. My rule of thumb is to move the quality slider down until I notice a difference, then take it up by a third. I’ve reduced the jpeg quality on this image to 62%, and the new file size is 145.9kb. This is still large for the web, but more acceptable for a photograph.
We can further reduce the size by using other image optimization tools, such as ImageOptim. This tool reduces image file size by making imperceptible quality adjustments, as well as by removing EXIF and other metadata from the file.
2. Don’t use images
Images are only really needed for displaying photos, screenshots, or other graphics. With the advance of CSS3 support, there are almost no situations where image files are necessary for page layout or design.
CSS features like linear-gradient alone have reduced the need for most of our image dependencies.
Icons were another common image use-case. We can now replace these with the more semantic use of icon fonts or svg “sprites.” Icomoon is an excellent tool for creating both icon fonts and svg icons. Chris Coyier puts icon fonts and svgs in an informative cage-match that excellently lays out the pros and cons of both methods.
Finally, the One Less JPG movement reminds us that “before you go worrying about how to minify every last library or shave tests out of Modernizr, try and see if you can remove just one photo from your design. It will make a bigger difference.”
3. Lazy-load images
There are many expertly programmed lazy-load images scripts, so I’ll just outline the logic here:
is the current element’s top near the bottom of the viewport?
set the background style to the data-background value
remove the element from the set of lazy-background elements
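A sketch of that outline, with the visibility check as a pure function so it can be tested outside the browser; the DOM wiring below it is illustrative, and the 200px threshold is an arbitrary choice.

```javascript
// Is an element's top near (within `threshold` px of) the bottom
// of the viewport?
function shouldLoad(elementTop, viewportHeight, threshold = 200) {
  return elementTop < viewportHeight + threshold;
}

// Browser wiring (illustrative):
// let lazyElements = Array.from(document.querySelectorAll('[data-background]'));
// window.addEventListener('scroll', () => {
//   lazyElements = lazyElements.filter(el => {
//     if (shouldLoad(el.getBoundingClientRect().top, window.innerHeight)) {
//       // set the background style to the data-background value
//       el.style.backgroundImage = `url(${el.dataset.background})`;
//       return false; // remove from the set of lazy-background elements
//     }
//     return true;
//   });
// });
```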
4. Order styles and scripts correctly
The order and placement of CSS files and scripts is important. CSS should be placed in the head element. As soon as the styles are downloaded and parsed, they can be applied to their DOM element matches, improving perceived load time.
5. Load scripts asynchronously
If a page is built with progressive-enhancement in mind, most scripts should be loaded asynchronously, or after the page itself. These scripts will enhance the experience of the users, and won’t be missed for the split-second before they are loaded.
6. Ditch the library
Your code can get a little more verbose when iterating over a set of elements:
Ditching the library isn’t a hard-and-fast rule. There are many things that jQuery provides that are difficult to achieve without.
For example, jQuery’s event binding interface can be overloaded to provide event delegation — binding an event handler to an element’s parent in order to allow dynamic loading or modification of new child elements without losing the event handler.
//Adds an event handler to tables, listening for clicks on child tr elements
Here is a native event delegation function I wrote to add this functionality natively:
document.getElementById("log").innerHTML += "child was clicked<br>";
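A simplified sketch of that delegation logic, assuming a modern browser with addEventListener and Element.matches; the original also papered over older-browser inconsistencies, as discussed below.

```javascript
// Bind one handler on `parent` that fires whenever a descendant
// matching `selector` is the source of the event
function delegate(parent, selector, eventType, handler) {
  parent.addEventListener(eventType, function (event) {
    let target = event.target;
    // walk up from the event target, looking for a matching child
    while (target && target !== parent) {
      if (target.matches(selector)) {
        handler.call(target, event);
        return;
      }
      target = target.parentNode;
    }
  });
}

// Usage, mirroring the jQuery example above:
// delegate(document.querySelector('table'), 'tr', 'click', function () {
//   document.getElementById('log').innerHTML += 'child was clicked<br>';
// });
```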
As you can see, much of the native implementation deals with the inconsistencies in browser implementation. jQuery and other libraries “pave over” these differences, and provide a consistent API for developers. Additionally, library implementation code has been tested by many users over many browsers. Library usage benefits from thousands of hours of bug and edge-case fixes.
When deciding whether or not to ditch the library, you need to carefully assess the needs of your site. Are there complex requirements or compatibility needs? Do you have time to write and test native implementation? There are certainly use cases where a library is not needed, but consider carefully before you make that decision.
7. Minify and Concatenate
Minification is the process of compressing files, removing unnecessary whitespace, line-breaks, and comments. It is often coupled with Uglification, which transforms code by renaming variables, and refactoring functions into the most compressed format. Any CSS or JS that makes its way to an end user should be both minified and uglified.
Concatenation is the process of combining multiple files into one. This reduces the number of requests, and is especially useful when optimizing for mobile networks where latency is often the bottle-neck.
There are many good stand-alone minification and concatenation tools, but I recommend integrating both of these processes into your build-step, using Grunt, Gulp, or similar.
8. Take advantage of browser caching
Asset caching is controlled via two headers sent with the file: Cache-Control and ETag. These headers are sent by the server with the file response.
Cache-Control defines how, and for how long the individual response can be cached by the browser and other intermediate caches. To learn more, see caching with Cache-Control.
ETag provides a revalidation token that is automatically sent by the browser to check if the resource has changed since the last time it was requested. To learn more, see validating cached responses with ETags.
There are different ways to set these headers depending on your website platform. This guide to caching headers does a good job of explaining the different options available for the header. Ultimately, you will need to use a method that is correct for your specific platform.
9. Enable compression
This is a simple optimization, and comes enabled on most servers by default. By enabling gzip compression, assets served are run through a compression algorithm and decompressed by the client. The details of how to enable gzip compression are different for each platform, so find a guide that is specific to your technology to install or enable it.
10. Take advantage of server caching
As dynamic pages are built, there can be many database requests for each page. Consider the typical blog: the article content, related stories, and comments are each separate database queries. The comments will be updated frequently, but the article itself and the related stories will not change once published. We can save the results of these queries in memory or on the disk for future use (saving an expensive database query). Like browser caching, the implementation will vary from platform to platform, but the theory is the same. Each platform will have its community-recommended caching solution, and honestly, the bang-to-buck ratio of creating your own is slim.
The logic of server caching is generally:
receive request for "page-2"
does "page-2.html" file exist on the file system?
if so, serve it directly
if not, build "page-2" page from database and templates
save this as "page-2.html" on the file system, and serve it
This pseudocode saves the cached page to the file system, but you could just as easily store data in memory or in an object-store. Google “caching in [your system]”, or read up on Varnish, memcached, redis for more information.
11. Optimize database queries
Expensive database queries can add hundreds of milliseconds to the lifecycle of a request. Optimizing queries is the process of:
Identifying expensive database queries
Identifying expensive queries
In general, look for queries that ask for more than they need. This can take the form of SELECT *, when all you actually need is SELECT first_name. Any query that returns more than you need, JOINs with more than is needed, or contains correlated subqueries should be suspect. Every database system is different, so there is no silver bullet when it comes to identifying bad queries.
Once bad queries are identified, use the database tools to understand the workings of the query. Many databases support an EXPLAIN keyword that returns information about how a query is interpreted and executed by the database engine.
With the help of EXPLAIN, you can see where you should add indexes to tables so that the statement executes faster by using indexes to find rows. You can also use EXPLAIN to check whether the optimizer joins the tables in an optimal order. – MySQL Documentation
12. Separate static assets onto a different domain
Browsers limit the number of simultaneous connections to a single domain, and send that domain’s cookies with every request, so serving static assets from a separate domain works around both. You don’t necessarily need to host your static files on a different server, or even in a different application. You can configure most servers to use a separate path and domain for a folder using the rewrite rules system for your particular platform.
13. Use a Content Delivery Network
Content Delivery Networks, or CDNs, store the contents of your website on servers that are physically closer to the end user’s location. When a request for an asset is made, the CDN will look on its servers to see if it has a cached copy. If so, it will return that to the client, avoiding any traffic to your servers completely. If the file is not available, the CDN will request the asset from your server, store it, and then send it to the user. Not only does this decrease load on your server, but it speeds up the request by delivering the asset from a physically nearby server.
Akamai and AWS both have very popular commercial offerings. However, you don’t have to be Facebook-scale to use a CDN. One company, CloudFlare, offers free CDN access for websites that serve under a certain amount of traffic. CloudFlare’s product setup is relatively simple: you edit your DNS to point to their nameservers, and all traffic is routed through their network. Assets are cached as they are requested, and served to users from their servers around the world.
14. Flush partial responses
Some background: during a request lifecycle, there are three main functions that take the majority of the time to build a page:
Network transmission time & latency
Building the page on the server
Rendering the page on the client
Often, while one part of the cycle is busy working, the other two are idle. We can perform this optimization to utilize the other two, and reduce the total time of the request.
Consider a request for “page-2”:
the request is sent from the client
the server receives and routes the request
page renderer is found
database queries are made
page is built
page is sent to the client
page is received by the client
client renders the page
client makes requests for linked assets
Now, during steps 4 through 6, the network and client are not utilized. Flushing a partial response means starting to send data over the network to the client before the entire response is ready. Segmenting a response in this way “pipelines” the process and decreases the time a client waits to receive the first response packet.
Each platform is different, but this pseudocode shows the general theory:
Flushing the output of the page in this way allows the client to receive the header before the entire page has finished building, and it may start rendering immediately.