Adding GraphQL to Ruby on Rails

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. – http://graphql.org/

This guide walks through installing and creating a GraphQL API in a Ruby on Rails application. It is a companion piece to the excellent Getting Started with Rails guide from RailsGuides. There are lots of resources that teach GraphQL itself better than I can, so this guide focuses on actually installing and using it in a Rails app.

Initial Schema

Let’s first verify that the schema of your app is what this guide expects. We will start off right where the RailsGuides tutorial leaves off. You should have articles and comments tables with the following schema:
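If you followed the RailsGuides tutorial, your db/schema.rb should contain something close to the following sketch (the field names here assume that tutorial; adjust to match your app):

```ruby
# db/schema.rb (abridged)
create_table "articles", force: :cascade do |t|
  t.string   "title"
  t.text     "text"
  t.datetime "created_at", null: false
  t.datetime "updated_at", null: false
end

create_table "comments", force: :cascade do |t|
  t.string   "commenter"
  t.text     "body"
  t.integer  "article_id"
  t.datetime "created_at", null: false
  t.datetime "updated_at", null: false
  t.index ["article_id"], name: "index_comments_on_article_id"
end
```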

Install GraphQL Gem

The GraphQL gem is located at https://rubygems.org/gems/graphql. Find the latest version of the gem, and add it to your Gemfile:
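For example (check rubygems.org for the current version number):

```ruby
# Gemfile
gem 'graphql'
```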

Then run the graphql install command in your terminal:
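With the gem in your Gemfile, install it and run the generator:

```shell
bundle install
rails generate graphql:install
```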

This should modify the Gemfile to add the GraphiQL development tool, create the necessary files in the application, and add the GraphQL route to the routes file.

Let’s do a quick status check to see if everything is working so far. Start the Rails server, and visit the GraphiQL route, localhost:3000/graphiql.

You should see the GraphiQL user interface.

Adding Schema

Since GraphQL uses knowledge of relationships between types, we must first define the types we will allow to be queried: in this case, Article and Comment. Add the following article and comment type files to /graphql/types:
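A sketch of the two type files, using the class-based API of recent graphql-ruby versions (older versions used `GraphQL::ObjectType.define`; adapt to whichever style your generator produced):

```ruby
# app/graphql/types/article_type.rb
module Types
  class ArticleType < Types::BaseObject
    field :id, ID, null: false
    field :title, String, null: true
    field :text, String, null: true
    # An article exposes its comments through this relationship
    field :comments, [Types::CommentType], null: true
  end
end
```

```ruby
# app/graphql/types/comment_type.rb
module Types
  class CommentType < Types::BaseObject
    field :id, ID, null: false
    field :commenter, String, null: true
    field :body, String, null: true
  end
end
```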

These files define the schema and the relationship between Articles and Comments in a way that GraphQL can understand.

Adding Queries

Let’s add an article query to the API. Open /graphql/types/query_type.rb and add the following query below the :testField query:
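A sketch of the query field (class-based API; the resolver looks up a single article by id):

```ruby
# app/graphql/types/query_type.rb
field :article, Types::ArticleType, null: true do
  argument :id, ID, required: true
end

def article(id:)
  Article.find_by(id: id)
end
```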

You should now be able to execute a query against GraphQL using the GraphiQL tool.

Here is the text of the query for your copy+paste convenience:
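Something along these lines (assuming an article with id 1 exists):

```graphql
{
  article(id: 1) {
    id
    title
    text
  }
}
```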

Adding a query to return all articles is just as simple. Add the following query type:
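A sketch, following the same pattern as the single-article field:

```ruby
# app/graphql/types/query_type.rb
field :articles, [Types::ArticleType], null: false

def articles
  Article.all
end
```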

Using the endpoint

Up to this point we’ve used the GraphiQL tool to build and test our API. Let’s move to the actual GraphQL endpoint, located at /graphql.

To test the endpoint you can use a tool like Postman to make requests to the app.

In Postman, let’s make a request to localhost:3000/graphql (remember to set the header Content-Type: application/json).

Set the request body to:
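A minimal example body (the query is sent as a single JSON string):

```json
{
  "query": "{ articles { id title comments { commenter body } } }"
}
```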

In my case, the response is:

Testing errors

Unless you’ve already configured your application differently, at this point you will likely run into an error.

Rails, by default, protects your application from a particular type of security risk called a “cross-site request forgery”.

A side-effect of this is that you cannot POST to the GraphQL endpoint unless you temporarily disable this feature by modifying your application_controller to read:
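One way to do this (a sketch; `:null_session` skips the CSRF check for requests without a session, rather than removing protection entirely):

```ruby
# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # Development only! This lets API clients such as Postman
  # POST to /graphql without a CSRF token.
  protect_from_forgery with: :null_session
end
```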

Please note that this is an important feature, and it should only be disabled during development, or if you want to create a completely public API endpoint.

Wrapping up & Next Steps

At this point we have installed GraphQL into our Rails app, created schemas for Articles and Comments, written query types to return that data from our application, and tested the real API endpoint using Postman.

From here you can take any path to consume the data, such as a front-end React or Angular application, or expose the endpoint publicly!

Some next steps may be:

  1. investigate a front-end GraphQL client, like Apollo
  2. fix inefficient queries with batching
  3. look into implementing security

Dynamic super classes (extends) in ES6

tl;dr

Create a class factory which allows dynamic super classes to be applied to an ES6 class (dynamic inheritance). This will be transparent to a consumer, meaning they will interact with it in the same way as the initial class.

Problem statement:

I was recently working on some code which extended an asynchronous module to add some additional functionality. The class initially looked like this:
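A hypothetical reconstruction of that shape (the real module and its method names differed; `Fetcher` is defined inline here, but in the original it was imported from another module, which is exactly what made it hard to replace in tests):

```javascript
// Stand-in for the imported asynchronous super class
class Fetcher {
  async fetch(path) {
    // pretend network call
    return `fetched:${path}`;
  }
}

// The child class is hard-wired to Fetcher via `extends`
class UserFetcher extends Fetcher {
  async getUser(id) {
    return this.fetch(`/users/${id}`);
  }
}
```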

However, testing a module like this is difficult for two reasons.

  1. The async nature of the behavior is challenging to test and mock
  2. The explicit super class Fetcher, which is imported in the module, is difficult to override without reaching down into the prototype and manually modifying methods during the test. Yuck!

Solution:

Wrap the extending (child) class declaration in a function that accepts a parameter for its super class, with a default parameter value. The inner class can then be declared using the SuperClass parameter, allowing the child class to reference either its default parent or a custom super class passed in explicitly.

The child class then extends or overrides any methods from super that are required. Finally, a new instance of the child class is returned, with the remaining arguments passed through from the class factory function.

For consistency of use, we can force the function to be called using new, making the abstraction completely transparent to any consumer.
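Putting the pieces together, a self-contained sketch (names are illustrative; this version is synchronous for brevity where the original was async):

```javascript
// Default super class; in real code this would be imported
class Fetcher {
  fetch(path) {
    return `real:${path}`;
  }
}

// Class factory: the super class is a parameter with a default value
function UserFetcher(SuperClass = Fetcher, ...args) {
  class User extends SuperClass {
    getUser(id) {
      return this.fetch(`/users/${id}`);
    }
  }
  // Remaining arguments are forwarded to the (possibly custom) constructor.
  // Because the factory returns an object, `new UserFetcher()` and
  // `UserFetcher()` behave identically -- transparent to consumers.
  return new User(...args);
}
```

Calling `UserFetcher()` gives the default behavior; `UserFetcher(MockFetcher)` swaps the parent, which makes the asynchronous super class trivial to stub out in tests.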

Now we can call the extending class (with no SuperClass property) and it will have its default and expected behavior:

Or, with the SuperClass property to dynamically set its parent:

Working Example

 

See the Pen dynamic super class by Michael Jasper (@mdjasper) on CodePen.

Adding mp3 files to a Create React App app

If you’re looking to include static files, like an mp3, in a create-react-app project and use ES6 import statements to consume them in a component, here are the steps. This process involves “ejecting” from create-react-app, so think about the implications of that before proceeding.

1) Eject from create-react-app by running npm run eject. The create-react-app documentation describes the eject command:

If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

Instead, it will copy all the configuration files and the transitive dependencies (Webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

2) Include the files you want to use somewhere in the src folder, probably alongside the component that will consume them.

3) We need to modify our dev and prod webpack configs to allow the files to be import-ed through webpack and included in the build folder. Add an entry to the exclude list that tests for the file. The exclude list should look like this:
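In the ejected config (config/webpack.config.dev.js and config/webpack.config.prod.js; the exact shape varies by create-react-app version), the catch-all loader’s exclude list gains an mp3 entry, something like:

```javascript
exclude: [
  /\.html$/,
  /\.(js|jsx)$/,
  /\.css$/,
  /\.json$/,
  /\.svg$/,
  /\.mp3$/, // new: keep mp3 files away from the catch-all loader
],
```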

Add a loader to the list of loaders for the file type you want to use:
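For example, a file-loader entry (a sketch; the output path mirrors how CRA names its other media files):

```javascript
{
  test: /\.mp3$/,
  loader: 'file-loader',
  options: {
    name: 'static/media/[name].[hash:8].[ext]',
  },
},
```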

Do this for both the dev and prod webpack files in the config folder.

4) Within the component file, you can now import the file you wish to use, specifying the relative path to the file:
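For example (the path and filename here are hypothetical):

```javascript
import mySound from './audio/my-sound.mp3';
```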

5) Then you can use the file within your component. For the example of an mp3 file, you can create a new Audio element in the constructor, and play or pause it using an event handler:
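A sketch of such a component (hypothetical names; the imported value is the URL webpack emitted for the mp3):

```javascript
class Player extends React.Component {
  constructor(props) {
    super(props);
    this.audio = new Audio(mySound); // browser Audio element
    this.state = { playing: false };
    this.toggle = this.toggle.bind(this);
  }

  toggle() {
    this.state.playing ? this.audio.pause() : this.audio.play();
    this.setState({ playing: !this.state.playing });
  }

  render() {
    return <button onClick={this.toggle}>{this.state.playing ? 'Pause' : 'Play'}</button>;
  }
}
```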

 

Simple React Examples

I ❤️ React, and have spent a bit of time teaching React concepts to others through bootcamp trainings and university courses.

Here are a few examples of concepts and code I’ve written for students that might be useful for you too:

Simple Composable Drawer Component

Illustrates simple props.children composition, as well as composition with stateful components.

  • Smart component
  • composition
  • children

TODO app

Stateful component composition with user-generated data. Illustrates animating out items when todos are removed from state.

  • Smart components
  • component lifecycle
  • event binding

Search with debounced input

Search-preview of the Google Books API. Illustrates a very simple debounce implementation for network-request performance.

  • fetch
  • composition
  • debounce
  • performance

Higher Order Components

Use a higher order component to generalize a common task of fetching and loading data into a component.

  • composition
  • fetch
  • promises

props.children composition

Very simple composition example.

  • containers
  • composition

 

Getting Started with Data Visualization in React

A primary goal of data visualization is to communicate information clearly and efficiently via statistical graphics, plots and information graphics. Numerical data may be encoded using dots, lines, or bars, to visually communicate a quantitative message. Effective visualization helps users analyze and reason about data and evidence. It makes complex data more accessible, understandable and usable.

– https://en.wikipedia.org/wiki/Data_visualization

TL;DR: Use data visualization to:

  1. Communicate information clearly
  2. Visually communicate a quantitative message
  3. Help users analyze and reason about data and evidence
  4. Make complex data more usable

Good and Bad Examples

A pie chart that shows the 100 most active “tweeters” of a particular hashtag? Terrible visualization. Measured against our criteria, it doesn’t effectively communicate a message or enable any type of reasoning about the data, other than to conclude that 100 is far too many data points for a pie chart.

“Almost never use a pie chart” – everyone’s stats teacher

A tree chart that shows the world’s defense budgets? Great visualization. It is clear and understandable, and illustrates a message to the viewer.

A map of the 2012 Electoral College results by state? I believe this is an excellent visualization. It shows not only each state’s votes, but also the weight each state has in the electoral college. It is accessible to the viewer.

A chart of the 100 most popular websites per month? At a quick glance, which is most popular? Is it the largest, or closest to the center? Which is the largest anyway? This is an ineffective visualization.

Technologies to visualize data on the web

There are three ways the data can be presented/visualized on the web

  1. Images: the visualization is designed in a tool like Photoshop, Illustrator, or Tableau
  2. CSS: CSS properties, such as width or height, determine the shape and size of objects that represent data points
  3. JavaScript: a JavaScript library creates SVG or Canvas elements, and adds interactive behavior to them

Javascript visualization libraries

There are many popular visualization and charting libraries. Three very popular and robust options are:

D3

http://d3js.org

https://www.npmjs.com/package/react-vis

Vis

http://visjs.org/

https://www.npmjs.com/package/react-graph-vis

Highcharts

https://www.highcharts.com/

https://www.npmjs.com/package/react-highcharts

Building a data driven chart in React step-by-step

We’re going to use Facebook’s excellent Create React App starter kit, so first, let’s create a clean install of CRA:
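Assuming create-react-app is installed globally (the project name here is arbitrary):

```shell
create-react-app react-dataviz
cd react-dataviz
```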

Then install dependencies:

Let’s use D3 with the react-vis wrapper. First, install the react-vis package into our project:
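```shell
npm install react-vis --save
```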

At this time, there is an unmet peer dependency problem which will prevent the app from compiling, so also install the peer dependency.

Finally, run the project
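```shell
npm start
```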

Now that the project is running, we can consume the react-vis package in our app. In the App.js file (the main component rendered by the app at this time), import the react-vis components and CSS:
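For example, importing a handful of the available components along with the stylesheet:

```javascript
import {
  XYPlot,
  XAxis,
  YAxis,
  HorizontalGridLines,
  LineSeries
} from 'react-vis';
import 'react-vis/dist/style.css';
```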

We are now ready to use the components from the react-vis package in our App. Somewhere in your app, insert an element:
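For example, an XYPlot with a hard-coded line series (sample data; the other react-vis series components work the same way):

```javascript
<XYPlot width={300} height={300}>
  <HorizontalGridLines />
  <XAxis />
  <YAxis />
  <LineSeries data={[{ x: 1, y: 3 }, { x: 2, y: 5 }, { x: 3, y: 15 }]} />
</XYPlot>
```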

We can now look at the page in our browser and see the chart rendered:

Creating an API for our chart to consume

 

 

Modern 100% height divs

In 2011 I published a popular article, 100% height Divs with 2 lines of code, which showed how to use JavaScript to get around the difficult layout problem of having two elements side-by-side both at 100% height of their container. In 2012, an update was published which showed a similar solution using jQuery. Both of these methods are now obsolete, and modern css solves the problem quite simply.

Enter flex-box.

Flex-box, now widely supported, allows complex layouts – like the 100% height problem – to be solved simply and elegantly. CSS-Tricks has a great guide to flex-box, which I reference frequently.

Here is an example of a simple 2 column layout using flex-box:

See the Pen bpzQjo by Michael Jasper (@mdjasper) on CodePen.

And another example of a more complex layout using flex-box:

See the Pen XdOyay by Michael Jasper (@mdjasper) on CodePen.

And finally, an example of flex-box to build an entire page layout:

See the Pen Flexbox golden layout by Michael Jasper (@mdjasper) on CodePen.

 

ES6 Depth-first object (tree) search

A co-worker recently asked about searching for properties in a JavaScript object, where the properties could be located in unspecified locations within the object (not nested, though). This was a job for a depth-first tree search! I don’t get to post about more traditional computer science topics very often, so I thought I’d share my little ES6 recursive search function:
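The original source isn’t reproduced here, so this is a sketch of such a function: check the current level for the key, then recurse into every object-valued property, returning the first value found.

```javascript
function findValue(obj, key) {
  if (obj === null || typeof obj !== 'object') return undefined;
  // Found at this level?
  if (Object.prototype.hasOwnProperty.call(obj, key)) return obj[key];
  // Otherwise, search each child depth-first
  for (const child of Object.values(obj)) {
    const found = findValue(child, key);
    if (found !== undefined) return found;
  }
  return undefined;
}
```

For example, `findValue({ a: { b: { target: 42 } } }, 'target')` returns 42.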

Explanation (commented source)

Example

This code can be used like so:

 

Microsoft Planner vs Trello


My organization has early access to Microsoft Planner through our Office 365 account, and I’ve been able to evaluate it against what we currently use: Trello. Spoiler alert: this post is fairly critical of Planner. In a Planner vs Trello head-to-head, Trello wins hands down.

1. Broken Comments

The first thing I tried in Microsoft Planner was to leave a comment on a card. It broke. Bad first experience.


2. No Markdown support anywhere

Markdown syntax is not supported in comments, descriptions, or checklists.


3. Only 1 person can be assigned to a card

A key feature of Trello for my team is that multiple people can be assigned to a card at any given time. This fits our workflow. Forcing a 1-card-1-person relationship arbitrarily enforces a workflow on users.

4. Bugs in other browsers

Microsoft Planner doesn’t seem to play nice in Safari. It is compatible with IE/Edge… perhaps that is what it was developed for as well.

5. Cannot Edit Comments

Once comments are posted in Microsoft Planner, it is impossible to edit them.

6. No Card Numbers

There are no visible card numbers in Planner. While they are also hidden in Trello, there are multiple plugins and options to surface them to the user.

7. No plugins or extensions

While this is technically not a failure of Microsoft Planner, community support does matter. Trello has thousands of community developed extensions that enhance and customize the interface to make it perfectly suited for a given team.

8. Character Limit in Checklists

Checklist items can be no longer than 1 line length

Don’t plan on being verbose in your checklists in Microsoft Planner — there is a 100 character limit per item.

9. No 3rd Party Integrations

An important part of my Trello workflow is integration with Slack. Trello has hundreds of integrations with other services. Microsoft Planner is still new to the game, and there are no integrations yet.

10. Access tied only to Office365

I have personal and family Trello boards, as well as work-related boards, all linked together under the same account. The only access point a user can have with Microsoft Planner is through your organization’s Office 365, meaning that vendors or contractors who aren’t part of your organization are out of the loop.

11. Limited Labels

Microsoft Planner seems to have a hard limit of 6 labels for a board. This is a pretty small set, and totally unusable if — like my team — you support dozens of applications.

No more than 6 labels per board is very limiting

12. Moving Cards Between Boards

It is impossible in Microsoft Planner to move cards between boards or projects. This is an essential function of Trello which allows us to move items between different teams’ boards if the ownership of projects changes, or when cards are escalated to different groups.

13. Archiving Completed Tasks

In Microsoft Planner, “completed” cards are hidden, but still listed in each section. There is no “archive” section where you can move completed tasks to remove them from your board.

14. Board Customization

There doesn’t seem to be a way in Microsoft Planner to customize the board in any way, such as background colors or other branding.

15. Multiple Checklists

Trello allows you to have multiple checklists per card, breaking up tasks into groups. Microsoft Planner allows only one checklist per card.

 

 

Recursive spiral art with canvas+javascript

Demo

(Hide yo kids, hide yo RAM)

See the Pen Fractal with inner circles by Michael Jasper (@mdjasper) on CodePen.

Explanation

The basic gist is that a circle is drawn, then another circle is drawn on top of it, centered on a point on the prior circle’s circumference, with the angle and the radius being incremented or decremented each recursive cycle. The rate of change in angle or radius is then modified and the scene redrawn on a time interval to create an animated effect.

Drawing circles

Each recursive circle is placed on a point on the prior circle’s circumference.

Deriving a point

x2 and y2 are found using the parametric equation of a circle:

x2 = x1 + r · cos(a)
y2 = y1 + r · sin(a)

where:

x2, y2 = the new circle’s center point
x1, y1 = the prior circle’s center point
r = the radius of the prior circle
a = the angle of the point on the circumference (in radians)

Illustrated below is the progression from circle to circle, creating the beginning of a spiral pattern. By adjusting the rate of change of the angle, the spiral becomes more “tight” or more “open”.

 


The ultimate beginner’s guide to web performance and speed

Make your webpage blazingly fast! This beginner’s guide includes 14 performance tips (from simple to complex) that will help you speed up any site.

Above is a video version of this guide. It goes in-depth into many of the topics, and includes a few more. The presentation was given to a group of web developers at The Church of Jesus Christ of Latter-day Saints, and is slightly tailored to some of their web properties. Enjoy!

This guide is loosely arranged by bang-to-buck ratio, so simpler optimizations (that generally have an immediate impact) are listed first.

  1. Optimize images
  2. Don’t use images
  3. Lazy load images
  4. CSS at the top, JavaScript at the bottom
  5. Load scripts asynchronously
  6. Ditch the library
  7. Minify and Concatenate
  8. Take advantage of browser caching
  9. Enable compression
  10. Take advantage of server caching
  11. Optimize database queries
  12. Separate static assets onto a different domain
  13. Use a content delivery network
  14. Flush partial responses

1. Optimize images

It’s not uncommon for an image straight from Photoshop or a digital camera to weigh several megabytes. Large and unoptimized images can be the number one cause of slow-loading pages. Luckily, this may be the simplest optimization as well.

This vacation image (right from my digital camera) is 12.8 MB. It is also 4000×6000 pixels, which is far too large for a webpage. Reducing the pixel size in Photoshop to 1000×667 takes the file size down to 480.7 KB. This is still far too large. In the Photoshop Save for Web menu, we can usually reduce the JPEG quality to between 60% and 70% without noticing any artifacting or degradation. My rule of thumb is to move the quality slider down until I notice a difference, then take it up by a third. I’ve reduced the JPEG quality on this image to 62%, and the new file size is 145.9 KB. This is still large for the web, but more acceptable for a photograph.

Optimized vacation photo

We can further reduce the size by using other image optimization tools, such as ImageOptim. This tool reduces image file size by making imperceptible quality adjustments, as well as by removing EXIF and other metadata from the file.

ImageOptim reducing image file size

2. Don’t use images

Images are only really needed for displaying photos, screenshots, or other graphics. With the advance of CSS3 support, there are almost no situations where image files are necessary for page layout or design. border-radius and linear-gradient alone have eliminated most of our image dependencies.

Icons were another common image use case. We can now replace these with the more semantic use of icon fonts or SVG “sprites.” Icomoon is an excellent tool for creating both icon fonts and SVG icons. Chris Coyier puts icon fonts and SVGs in an informative cage-match that excellently lays out the pros and cons of both methods.

Finally, the One Less JPG movement reminds us that “before you go worrying about how to minify every last library or shave tests out of Modernizr, try and see if you can remove just one photo from your design. It will make a bigger difference.”

3. Lazy-load images

Inline images

If an image falls below the viewport (it is not immediately visible), load that image “lazily”, after the main content has loaded. We can accomplish this simply and semantically, in a way that still works for non-JavaScript users.

There are many expertly programmed lazy-load image scripts, so I’ll just outline the logic here:
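A sketch of that logic (browser code; production scripts add scroll throttling, error handling, and more — the markup defers the real URL into a `data-src` attribute):

```javascript
// Markup: <img data-src="photo.jpg" alt="...">
document.addEventListener('DOMContentLoaded', function () {
  var lazyImages = document.querySelectorAll('img[data-src]');

  function loadIfVisible() {
    lazyImages.forEach(function (img) {
      var inViewport = img.getBoundingClientRect().top < window.innerHeight;
      if (inViewport && img.dataset.src) {
        img.src = img.dataset.src; // trigger the real download
        delete img.dataset.src;
      }
    });
  }

  loadIfVisible();
  window.addEventListener('scroll', loadIfVisible);
});
```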

CSS-Tricks has a modern lazy-load script which seems to work well.

Background images

Lazy loading background images in CSS uses the same logic as inline images, with a slightly modified implementation:
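One common approach (a sketch with an example filename): keep the heavy background image behind a class that JavaScript adds once the element nears the viewport:

```css
.hero {
  background-image: none;
}
.hero.loaded {
  background-image: url('large-photo.jpg');
}
```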

4. CSS at the top, JavaScript at the bottom

The order and placement of CSS files and scripts is important. CSS should be placed in the head element. As soon as the styles are downloaded and parsed, they can be applied to their matching DOM elements, improving perceived load speed.

Included JavaScript files will block the parsing and execution of a page, and should be placed at the bottom of the document to avoid blocking.

5. Load scripts asynchronously

If a page is built with progressive-enhancement in mind, most scripts should be loaded asynchronously, or after the page itself. These scripts will enhance the experience of the users, and won’t be missed for the split-second before they are loaded.

Most popular libraries (read: jQuery) have built-in asynchronous script loading, but it is also simple to implement without a library. HTML5Rocks has a deep dive into async script loading. It’s long, but well worth the read.

6. Ditch the library

Speaking of popular front-end libraries: do you really need jQuery? The tongue-in-cheek site, Vanilla-JS offers up native javascript solutions for common jQuery uses. Indeed, many of the uses of jQuery can be reproduced by simple native JavaScript:
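For example, toggling a class (hypothetical selectors):

```javascript
// jQuery: $('#menu').addClass('open');
// Native:
document.querySelector('#menu').classList.add('open');
```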

Your code can get a little more verbose when iterating over a set of elements:
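For example, applying a class to every matching element:

```javascript
// jQuery: $('.item').addClass('highlight');
// Native:
document.querySelectorAll('.item').forEach(function (el) {
  el.classList.add('highlight');
});
```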

Case-by-case

Ditching the library isn’t a hard-and-fast rule. There are many things that jQuery provides that are difficult to achieve without it.

For example, jQuery’s event binding interface can be overloaded to provide event delegation — binding an event handler to an element’s parent in order to allow dynamic loading or modification of new child elements without losing the event handler.

Here is a native event delegation function I wrote to add this functionality natively:
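The original function isn’t reproduced here; below is a simplified modern sketch of the same idea (the original also handled legacy browser event APIs, which is where most of the extra code lived):

```javascript
function delegate(parent, eventType, selector, handler) {
  parent.addEventListener(eventType, function (event) {
    // Walk up from the actual target looking for a matching descendant
    var el = event.target;
    while (el && el !== parent) {
      if (el.matches(selector)) {
        handler.call(el, event);
        return;
      }
      el = el.parentNode;
    }
  });
}

// Usage: handles clicks on any current or future <li> inside #list
// delegate(document.querySelector('#list'), 'click', 'li', function () { /* ... */ });
```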

As you can see, much of the native implementation deals with the inconsistencies in browser implementation. jQuery and other libraries “pave over” these differences, and provide a consistent API for developers. Additionally, library implementation code has been tested by many users over many browsers. Library usage benefits from thousands of hours of bug and edge-case fixes.

When deciding whether or not to ditch the library, you need to carefully assess the needs of your site. Are there complex requirements or compatibility needs?  Do you have time to write and test native implementation? There are certainly use cases where a library is not needed, but consider carefully before you make that decision.

7. Minify and Concatenate

If your CSS or JavaScript files are being served to a user in the same form that you authored them, then there is a performance penalty to pay for downloading all the unnecessary whitespace and descriptive naming conventions that make for good development.

Minification is the process of compressing files, removing unnecessary whitespace, line-breaks, and comments. It is often coupled with Uglification, which transforms code by renaming variables, and refactoring functions into the most compressed format. Any CSS or JS that makes its way to an end user should be both minified and uglified.

Concatenation is the process of combining multiple files into one. This reduces the number of requests, and is especially useful when optimizing for mobile networks where latency is often the bottle-neck.

There are many good stand-alone minification and concatenation tools, but I recommend integrating both of these processes into your build-step, using Grunt, Gulp, or similar.

8. Take advantage of browser caching

Once a user has visited a page on your site, assets have been downloaded and can be cached on the user’s system. Typically static assets are cached: CSS, JavaScript, image files, etc., as well as dynamic requests: search results that change infrequently, lookups, or other ajax requests.

Asset caching is controlled via two headers sent with the file: Cache-Control and ETag. These headers are sent by the server with the file response.

From Google’s Leverage Browser Caching:

  • Cache-Control  defines how, and for how long the individual response can be cached by the browser and other intermediate caches. To learn more, see caching with Cache-Control.
  • ETag  provides a revalidation token that is automatically sent by the browser to check if the resource has changed since the last time it was requested. To learn more, see validating cached responses with ETags.
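For example, a response serving a long-lived static asset might carry headers like these (illustrative values):

```
Cache-Control: public, max-age=31536000
ETag: "5d8c72a5edda8aeb"
```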

There are different ways to set these headers depending on your website platform. This guide to caching headers does a good job of explaining the different options available for each header. Ultimately, you will need to use a method that is correct for your specific platform.

9. Enable compression

This is a simple optimization, and it comes enabled on most servers by default. By enabling gzip compression, served assets are run through a compression algorithm and decompressed by the client. The details of how to enable gzip compression are different for each platform, so find a guide that is specific to your technology to install or enable it.

10. Take advantage of server caching

As dynamic pages are built, there can be many database requests for each page. Consider the typical blog: the article content, related stories, and comments are each separate database queries. The comments will be updated frequently, but the article itself and the related stories will not change once published. We can save the results of these queries in memory or on disk for future use (saving an expensive database query). Like browser caching, the implementation will vary from platform to platform, but the theory is the same. Each platform will have its community-recommended caching solution, and honestly, the bang-to-buck ratio of creating your own is slim.

The logic of server caching is generally:

This pseudocode saves the cached page to the file system, but you could just as easily store data in memory or in an object store. Google “caching in [your system]”, or read up on Varnish, memcached, or Redis for more information.

11. Optimize database queries

Expensive database queries can add hundreds of milliseconds to the lifecycle of a request. Optimizing queries is the process of:

  1. Identifying expensive database queries
  2. Refactoring
  3. Repeat

Identifying expensive queries

In general, look for queries that ask for more than they need. This can take the form of  SELECT * , when all you actually need is  SELECT first_name . Any query that returns more than you need, JOINs more than is needed, or contains correlated subqueries should be suspect. Every database system is different, so there is no silver bullet when it comes to identifying bad queries.
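For example (hypothetical table and columns):

```sql
-- Returns every column of every matching row:
SELECT * FROM users WHERE last_name = 'Jasper';

-- Returns only what the page actually displays:
SELECT first_name FROM users WHERE last_name = 'Jasper';
```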

Refactoring

Once bad queries are identified, use the database tools to understand the workings of the query. Many databases support an EXPLAIN  keyword that returns information about how a query is interpreted and executed by the database engine.

With the help of EXPLAIN, you can see where you should add indexes to tables so that the statement executes faster by using indexes to find rows. You can also use EXPLAIN to check whether the optimizer joins the tables in an optimal order. – MySQL Documentation

12. Separate static assets onto a different domain

When a request is made to a server, the client machine will send data that is relevant to the request to help the server complete it. This way, when a user requests a new page, data, such as a session cookie, is sent to the server. Without this interaction, dynamic and personalized user experiences would not be possible. However, there are many types of files where cookies are not relevant. This is the case with static assets (CSS, JavaScript, images, etc). To make static asset requests smaller, serve assets from a static-only domain. You may have noticed, while loading a YouTube video, a network request to *.ytimg.com. ytimg.com is YouTube’s cookie-less image domain. That way, when any image is requested from YouTube (like a thumbnail), it doesn’t have to send or receive cookies for every request.

You don’t necessarily need to host your static files on a different server, or even in a different application. You can configure most servers to use a separate path and domain for a folder using the rewrite rules system for your particular platform.

13. Use a Content Delivery Network

Content Delivery Networks, or CDNs, store contents of your website on servers that are physically closer to the end user’s location. When a request for an asset is made, the CDN will look on its servers to see if it has a cached copy. If so, it will return that to the client, avoiding any traffic to your servers completely. If the file is not available, the CDN will request the asset from your server, store it, and then send it to the user. Not only does this decrease load on your server, but it speeds up the request by delivering the asset from a physically nearby server.

Akamai and AWS both have very popular commercial offerings. However, you don’t have to be Facebook-scale to use a CDN. One company, CloudFlare, offers free CDN access for websites that serve under a certain amount of traffic. Cloudflare’s setup is relatively simple: you edit your DNS to point to their nameservers, and all traffic is routed through their network. Assets are cached as they are requested, and served to users from their servers around the world.

14. Flush partial responses

Some background: during a request lifecycle, there are three main phases that take the majority of the time to deliver a page:

  1. Network transmission time & latency
  2. Building the page on the server
  3. Rendering the page on the client

Often, while one part of the cycle is busy working, the other two are idle. We can perform this optimization to utilize the other two, and reduce the total time of the request.

Consider a request for “page-2”:

  1. the request is sent from the client
  2. network time
  3. the server receives and routes the request
  4. page renderer is found
  5. database queries are made
  6. page is built
  7. page is sent to the client
  8. network time
  9. page is received by the client
  10. client renders the page
  11. client makes requests for linked assets

Now, during steps 4-6, the network and client are not utilized. Flushing a partial response means starting to send data over the network to the client before the entire response is ready. Segmenting a response in this way “pipelines” the process and decreases the time a client waits to receive the first response packet.

Each platform is different, but this pseudocode shows the general theory:

Flushing the output of the page in this way allows the client to receive the header before the entire page has finished building, and it may start rendering immediately.