{"id":2453,"date":"2020-12-30T13:24:42","date_gmt":"2020-12-30T13:24:42","guid":{"rendered":"https:\/\/ntsplhosting.com\/blog\/?p=2453"},"modified":"2021-12-21T05:26:59","modified_gmt":"2021-12-21T05:26:59","slug":"tips-for-writing-and-deploying-node-js-apps-on-cloud-functions","status":"publish","type":"post","link":"https:\/\/www.ntsplhosting.com\/blog\/tips-for-writing-and-deploying-node-js-apps-on-cloud-functions\/","title":{"rendered":"Tips for Writing and Deploying Node.js Apps on Cloud Functions"},"content":{"rendered":"<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>The DPE Client Library team at Google handles the release, maintenance, and support of Google Cloud client libraries. Essentially, we act as the open-source maintainers of Google\u2019s 350+ repositories on GitHub. It\u2019s a big job&#8230;<\/p>\n<p>For this work to scale, it\u2019s been critical to automate various common tasks such as validating licenses, <a href=\"https:\/\/dev.to\/bcoe\/how-my-team-releases-libraries-23el\" target=\"_blank\" rel=\"noopener noreferrer\">managing releases<\/a>, and merging pull requests (PRs) once tests pass. To build our various automations, we decided to use the Node.js-based framework <a href=\"https:\/\/probot.github.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Probot<\/a>, which simplifies the process of writing web applications that listen for <a href=\"https:\/\/docs.github.com\/en\/free-pro-team@latest\/developers\/webhooks-and-events\/about-webhooks\" target=\"_blank\" rel=\"noopener noreferrer\">Webhooks<\/a> from the GitHub API. <i>[Editor\u2019s note: The team has deep expertise in Node.js. 
The co-author <a href=\"https:\/\/twitter.com\/benjamincoe?lang=en\" target=\"_blank\" rel=\"noopener noreferrer\">Benjamin Coe<\/a> was the third engineer at npm, Inc., and is currently a core collaborator on Node.js.]<\/i><\/p>\n<p>Along with the Probot framework, we decided to use <a href=\"https:\/\/cloud.google.com\/functions\">Cloud Functions<\/a> to deploy those automations, with the goal of reducing our operational overhead. We found that Cloud Functions are a great option for quickly and easily turning Node.js applications into hosted services:<\/p>\n<ul>\n<li>Cloud Functions can <a href=\"https:\/\/cloud.google.com\/functions\/docs\/max-instances\">scale automatically<\/a> as your user base grows, without the need to provision and manage additional hardware.<\/li>\n<li>If you\u2019re familiar with creating an <a href=\"https:\/\/docs.npmjs.com\/creating-node-js-modules\" target=\"_blank\" rel=\"noopener noreferrer\">npm module<\/a>, it only takes a few additional steps to deploy it as a Cloud Function, either with the <a href=\"https:\/\/cloud.google.com\/sdk\/gcloud\">gcloud CLI<\/a> or from the Google Cloud Console (see: \u201c<a href=\"https:\/\/cloud.google.com\/functions\/docs\/first-nodejs\"><i>Your First Function: Node.js<\/i><\/a>\u201d).<\/li>\n<li>Cloud Functions integrate automatically with Google Cloud services, such as <a href=\"https:\/\/cloud.google.com\/functions\/docs\/monitoring#writing_and_viewing_logs\">Cloud Logging<\/a> and <a href=\"https:\/\/cloud.google.com\/functions\/docs\/monitoring\">Cloud Monitoring<\/a>.<\/li>\n<li>Cloud Functions can be triggered by events from services such as <a href=\"https:\/\/firebase.google.com\/docs\/firestore\/extend-with-functions\" target=\"_blank\" rel=\"noopener noreferrer\">Firestore<\/a>, <a href=\"https:\/\/cloud.google.com\/functions\/docs\/calling\/pubsub\">Pub\/Sub<\/a>, <a href=\"https:\/\/cloud.google.com\/functions\/docs\/calling\/storage\">Cloud Storage<\/a>, and <a 
href=\"https:\/\/cloud.google.com\/tasks\/docs\/tutorial-gcf\">Cloud Tasks<\/a>.<\/li>\n<\/ul>\n<p>Jump forward two years, and we now manage 16 automations that handle over 2 million requests from GitHub each day. And we continue to use Cloud Functions to deploy our automations. Contributors can concentrate on writing their automations, and it\u2019s easy for us to deploy them as functions in our production environment.<\/p>\n<p>Designing for serverless comes with its own set of challenges, around how you structure, deploy, and debug your applications, but we\u2019ve found the trade-offs work for us. Throughout the rest of this article, drawing on these first-hand experiences, we outline best practices for deploying Node.js applications on Cloud Functions, with an emphasis on the following goals:<\/p>\n<ul>\n<li>Performance &#8211; Writing functions that serve requests quickly and minimize cold start times.<\/li>\n<li>Observability &#8211; Writing functions that are easy to debug when exceptions do occur.<\/li>\n<li>Leveraging the platform &#8211; Understanding the constraints that Cloud Functions and Google Cloud introduce to application development, e.g., understanding <a href=\"https:\/\/cloud.google.com\/compute\/docs\/regions-zones\">regions and zones<\/a>.<\/li>\n<\/ul>\n<p>With these concepts under your belt, you too can reap the operational benefits of running Node.js-based applications in a serverless environment, while avoiding potential pitfalls.<\/p>\n<h2>Best practices for structuring your application<\/h2>\n<p>In this section, we discuss attributes of the Node.js runtime that are important to keep in mind when writing code intended for deployment on Cloud Functions. 
Of most concern:<\/p>\n<ul>\n<li>The average package on npm has a tree of 86 transitive dependencies (see: <a href=\"https:\/\/snyk.io\/blog\/how-much-do-we-really-know-about-how-packages-behave-on-the-npm-registry\/\" target=\"_blank\" rel=\"noopener noreferrer\">How much do we really know about how packages behave on the npm registry?<\/a>). It\u2019s important to consider the total size of your application\u2019s dependency tree.<\/li>\n<li>Node.js APIs are generally non-blocking by default, and these asynchronous operations can interact surprisingly with your function\u2019s request lifecycle. Avoid unintentionally creating asynchronous work in the background of your application.<\/li>\n<\/ul>\n<p>With that as the backdrop, here\u2019s our best advice for writing Node.js code that will run in Cloud Functions.<\/p>\n<h3>1. Choose your dependencies wisely<\/h3>\n<p>Disk operations in the <a href=\"https:\/\/github.com\/google\/gvisor\" target=\"_blank\" rel=\"noopener noreferrer\">gVisor sandbox<\/a>, which Cloud Functions run within, will likely be slower than on your laptop\u2019s typical operating system (that\u2019s because gVisor provides an <a href=\"https:\/\/gvisor.dev\/docs\/architecture_guide\/security\/\" target=\"_blank\" rel=\"noopener noreferrer\">extra layer of security<\/a> on top of the operating system, at the cost of some additional latency). As such, minimizing your npm dependency tree reduces the reads necessary to bootstrap your application, improving cold start performance.<\/p>\n<p>You can run the command <b><i>npm ls --production<\/i><\/b> to get an idea of how many dependencies your application has. Then, you can use the online tool <a href=\"https:\/\/bundlephobia.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">bundlephobia.com<\/a> to analyze individual dependencies, including their total byte size. 
You should remove any unused dependencies from your application, and favor smaller dependencies.<\/p>\n<p>Equally important is being selective about the files you import from your dependencies. Take the library <a href=\"https:\/\/www.npmjs.com\/package\/googleapis\" target=\"_blank\" rel=\"noopener noreferrer\">googleapis<\/a> on npm: running <i><b>require(&#8216;googleapis&#8217;)<\/b><\/i> pulls in the <b>entire index<\/b> of <a href=\"https:\/\/github.com\/googleapis\/google-api-nodejs-client\/tree\/master\/src\/apis\" target=\"_blank\" rel=\"noopener noreferrer\">Google APIs<\/a>, resulting in hundreds of disk read operations. Instead, you can pull in just the Google APIs you\u2019re interacting with, like so:<\/p>\n<\/div>\n<\/div>\n<div class=\"block-code\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\">\n<pre><code>\/\/ Reconstructed sketch; the submodule path may vary by googleapis version.\nconst {sql} = require('googleapis\/build\/src\/apis\/sql');\nconst sqlAdmin = sql('v1beta4');<\/code><\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>It\u2019s common for libraries to allow you to pull in the methods you use selectively\u2014be sure to check if your dependencies have similar functionality before pulling in the whole index.<\/p>\n<h3>2. Use \u2018require-so-slow\u2019 to analyze require-time performance<\/h3>\n<p>A great tool for analyzing the require-time performance of your application is <a href=\"https:\/\/www.npmjs.com\/package\/require-so-slow\" target=\"_blank\" rel=\"noopener noreferrer\"><b><i>require-so-slow<\/i><\/b><\/a>. This tool allows you to output a timeline of your application\u2019s require statements, which can be loaded in <a href=\"https:\/\/chromedevtools.github.io\/timeline-viewer\/\" target=\"_blank\" rel=\"noopener noreferrer\">DevTools Timeline Viewer<\/a>. 
As an example, let\u2019s compare loading the entire catalog of googleapis versus a single required API (in this case, the SQL API):<\/p>\n<p><b>Timeline of require(&#8216;googleapis&#8217;):<\/b><\/p>\n<\/div>\n<\/div>\n<div class=\"block-image_full_width\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid\">\n<figure class=\"article-image--large h-c-grid__col h-c-grid__col--6 h-c-grid__col--offset-3 \"><img src=\"https:\/\/storage.googleapis.com\/gweb-cloudblog-publish\/images\/Timeline_of_requiregoogle.0999032919990659.max-1000x1000.jpg\" alt=\"Timeline of require('googleapis').jpg\" \/><\/figure>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>The graphic above demonstrates the total time to load the googleapis dependency. Cold start times will include the entire 3s span of the chart.<\/p>\n<p><b>Timeline of require(&#8216;googleapis\/build\/src\/apis\/sql&#8217;):<\/b><\/p>\n<\/div>\n<\/div>\n<div class=\"block-image_full_width\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid\">\n<figure class=\"article-image--large h-c-grid__col h-c-grid__col--6 h-c-grid__col--offset-3 \"><img src=\"https:\/\/storage.googleapis.com\/gweb-cloudblog-publish\/images\/Timeline_of_requiregoogleapis_build_src_ap.max-1000x1000.jpg\" alt=\"Timeline of require('googleapis_build_src_apis_sql').jpg\" \/><\/figure>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>The graphic above demonstrates the total time to load just the sql submodule. The cold start time is a more respectable 195ms.<\/p>\n<p>In short, requiring the SQL API directly is over 10 times faster than loading the full googleapis index!<\/p>\n<h3>3. 
Understand the request lifecycle, and avoid its pitfalls<\/h3>\n<p>The <a href=\"https:\/\/cloud.google.com\/functions\/docs\/concepts\/exec\">Cloud Functions documentation<\/a> issues the following warning about execution timelines: <i>A function has access to the resources requested (CPU and memory) only for the duration of function execution. Code run outside of the execution period is not guaranteed to execute, and it can be stopped at any time.\u00a0<\/i><\/p>\n<p>This problem is easy to bump into with Node.js, as many of its APIs are asynchronous by default. It&#8217;s important when structuring your application that <b><i>res.send()<\/i><\/b> is called only after all asynchronous work has completed.<\/p>\n<p>Here\u2019s an example of a function that would have its resources revoked unexpectedly:<\/p>\n<\/div>\n<\/div>\n<div class=\"block-code\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\">\n<pre><code>\/\/ Reconstructed sketch; assumes a Firestore client in a module-level db.\nexports.helloHttp = (req, res) =&gt; {\n  \/\/ Bug: set() returns a promise that is still pending when we respond.\n  db.collection('mycollection').doc('mydocument').set({foo: 'bar'});\n  res.send('Hello World!');\n};<\/code><\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>In the example above, the promise created by <b><i>set()<\/i><\/b> will still be running when <b><i>res.send()<\/i><\/b> is called. 
It should be rewritten like this:<\/p>\n<\/div>\n<\/div>\n<div class=\"block-code\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\">\n<pre><code>exports.helloHttp = async (req, res) =&gt; {\n  \/\/ Await the write so that it completes within the execution period.\n  await db.collection('mycollection').doc('mydocument').set({foo: 'bar'});\n  res.send('Hello World!');\n};<\/code><\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>This code will no longer run outside the execution period because we\u2019ve <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/JavaScript\/Reference\/Operators\/await\" target=\"_blank\" rel=\"noopener noreferrer\">awaited<\/a> <b>set()<\/b> before calling <b>res.send()<\/b>.<\/p>\n<p>A good way to debug this category of bug is with well-placed logging: Add debug lines following critical asynchronous steps in your application. Include timing information in these logs relative to when your function begins a request. Using <a href=\"https:\/\/cloud.google.com\/logging\/docs\/view\/logs-viewer-interface\">Logs Explorer<\/a>, you can then examine a single request and ensure that the output matches your expectation; missing log entries, or entries coming significantly later <i>(leaking into subsequent requests)<\/i>, are indicative of an unhandled promise.<\/p>\n<p>During cold starts, code in the global scope <i>(at the top of your source file, outside of the handler function)<\/i> will be executed outside of the context of normal function execution. You should avoid asynchronous work entirely in the global scope, e.g., <i><b>fs.read()<\/b><\/i>, as it will always run outside of the execution period.<\/p>\n<h3>4. Understand and use the global scope effectively<\/h3>\n<p>It\u2019s okay to have \u2018expensive\u2019 synchronous operations, such as <b><i>require<\/i><\/b> statements, in the global scope. 
When benchmarking cold start times, we found that moving require statements to the global scope <i>(rather than lazy-loading within your function)<\/i> led to a 500ms to 1s improvement in cold start times. This can be attributed to the fact that Cloud Functions are allocated compute resources while bootstrapping.<\/p>\n<p>Also consider moving other expensive one-time synchronous operations, e.g., <b><i>fs.readFileSync<\/i><\/b>, into the global scope. The important thing is to avoid asynchronous operations, as they will be performed outside of the execution period.<\/p>\n<p>Cloud Functions recycle the execution environment; this means that you can use the global scope to cache expensive one-time operations that remain constant across function invocations:<\/p>\n<\/div>\n<\/div>\n<div class=\"block-code\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\">\n<pre><code>\/\/ Reconstructed sketch. Expensive synchronous work is fine up here:\nconst fs = require('fs');\nconst config = JSON.parse(fs.readFileSync('.\/config.json', 'utf8'));\n\n\/\/ A cache for an asynchronous result, populated during the first request:\nlet cachedSettings;\n\nexports.helloHttp = async (req, res) =&gt; {\n  if (!cachedSettings) {\n    \/\/ Awaited here, then reused by later invocations of this instance:\n    cachedSettings = await db.collection('settings').doc('app').get();\n  }\n  res.send(`Hello ${cachedSettings.data().name}!`);\n};<\/code><\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>It\u2019s critical that we await asynchronous operations before sending a response, but it\u2019s okay to cache their response in the global scope.<\/p>\n<h3>5. Move expensive background operations into Cloud Tasks<\/h3>\n<p>A good way to improve the throughput of your Cloud function, i.e., reduce overall latency during cold starts and minimize the necessary instances during traffic spikes, is to move work outside of the request handler. 
Take the following application that performs several expensive database operations:<\/p>\n<\/div>\n<\/div>\n<div class=\"block-code\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\">\n<pre><code>\/\/ Reconstructed sketch; the helpers stand in for real database logic.\nexports.handler = async (req, res) =&gt; {\n  \/\/ None of these results are needed to build the response:\n  await updateUserRecord(db, req.body);\n  await writeAuditLog(db, req.body);\n  await refreshAggregates(db);\n  res.send('OK');\n};<\/code><\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>The response sent to the user does not require any information returned by our database updates. Rather than waiting for these operations to complete, we could instead use <a href=\"https:\/\/cloud.google.com\/tasks\">Cloud Tasks<\/a> to schedule this operation in another Cloud function, and respond to the user immediately. This has the added benefit that <a href=\"https:\/\/cloud.google.com\/tasks\/docs\/configuring-queues\">Cloud Task queues<\/a> support retry attempts, shielding your application from intermittent errors, e.g., a one-off failure writing to the database.<\/p>\n<p>Here\u2019s our prior example split into a user-facing function and a background function:<\/p>\n<p><b>User-facing function:<\/b><\/p>\n<\/div>\n<\/div>\n<div class=\"block-code\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\">\n<pre><code>\/\/ Reconstructed sketch; PROJECT_ID, LOCATION, QUEUE, and TASK_URL are\n\/\/ placeholders for your own values.\nconst {CloudTasksClient} = require('@google-cloud\/tasks');\nconst client = new CloudTasksClient();\n\nexports.handler = async (req, res) =&gt; {\n  await client.createTask({\n    parent: client.queuePath(PROJECT_ID, LOCATION, QUEUE),\n    task: {\n      httpRequest: {\n        httpMethod: 'POST',\n        url: TASK_URL,\n        headers: {'Content-Type': 'application\/json'},\n        body: Buffer.from(JSON.stringify(req.body)).toString('base64'),\n      },\n    },\n  });\n  \/\/ Respond immediately; the expensive work happens in the background:\n  res.send('OK');\n};<\/code><\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p><b>Background function:<\/b><\/p>\n<\/div>\n<\/div>\n<div class=\"block-code\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\">\n<pre><code>\/\/ Reconstructed sketch; invoked by Cloud Tasks with the payload enqueued above.\nexports.taskHandler = async (req, res) =&gt; {\n  await updateUserRecord(db, req.body);\n  await writeAuditLog(db, req.body);\n  await refreshAggregates(db);\n  \/\/ A non-2xx response would cause Cloud Tasks to retry the task:\n  res.send('OK');\n};<\/code><\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<h2>Deploying your application<\/h2>\n<p>The next section of this article discusses settings, such as <a href=\"https:\/\/cloud.google.com\/functions\/docs\/concepts\/exec#memory\">memory<\/a> and <a href=\"https:\/\/cloud.google.com\/functions\/docs\/locations\">location<\/a>, that you should take into account when deploying your application.<\/p>\n<h3>1. Consider memory\u2019s relationship to performance<\/h3>\n<p>Allocating more memory to your functions will also result in the allocation of more CPU (see: \u201c<a href=\"https:\/\/cloud.google.com\/functions\/pricing\">Compute Time<\/a>\u201d). For CPU-bound applications, e.g., applications that require a significant number of dependencies at start up, or that are performing computationally expensive operations (see: \u201c<a href=\"https:\/\/cloud.google.com\/functions\/docs\/tutorials\/imagemagick#functions-prepare-environment-nodejs\">ImageMagick Tutorial<\/a>\u201d), you should experiment with various instance sizes as a first step towards improving request and cold-start performance.<\/p>\n<p>You should also be mindful of whether your function has a reasonable amount of available memory when running; applications that run too close to their memory limit will occasionally crash with out-of-memory errors, and may have unpredictable performance in general.<\/p>\n<p>You can use the <a href=\"https:\/\/cloud.google.com\/monitoring\/charts\/metrics-explorer\">Cloud Monitoring Metrics Explorer<\/a> to view the memory usage of your Cloud functions. In practice, my team found that 128MB functions did not provide enough memory for our Node.js applications, which average 136MB. Consequently, we moved to the 256MB setting for our functions and stopped seeing memory issues.<\/p>\n<h3>2. 
Location, location, location<\/h3>\n<p>The speed of light dictates that the best case for TCP\/IP traffic will be ~2ms of latency per 100 miles<sup>1<\/sup>. This means that a request between New York City and London has a minimum of 50ms of latency. You should take these constraints into account when designing your application.<\/p>\n<p>If your Cloud functions are interacting with other Google Cloud services, deploy your functions in the same region as these other services. This will ensure a high-bandwidth, low-latency network connection between your Cloud function and these services (see: \u201c<a href=\"https:\/\/cloud.google.com\/compute\/docs\/regions-zones\"><i>Regions and Zones<\/i><\/a>\u201d).<\/p>\n<p>Make sure you deploy your Cloud functions close to your users. If people using your application are in California, deploy in <i>us-west<\/i> rather than <i>us-east<\/i>; this alone can save 70ms of latency.<\/p>\n<h2>Debugging and analyzing your application<\/h2>\n<p>The next section of this article provides some recommendations for effectively debugging your application once it\u2019s deployed.<\/p>\n<h3>1. Add debug logging to your application<\/h3>\n<p>In a Cloud Functions environment, avoid using client libraries such as <a href=\"https:\/\/www.npmjs.com\/package\/@google-cloud\/logging\" target=\"_blank\" rel=\"noopener noreferrer\"><i>@google-cloud\/logging<\/i><\/a> <i>and <a href=\"https:\/\/www.npmjs.com\/package\/@google-cloud\/monitoring\" target=\"_blank\" rel=\"noopener noreferrer\">@google-cloud\/monitoring<\/a><\/i> for telemetry. 
These libraries buffer writes to the backend API, which can lead to work remaining in the background after calling <b><i>res.send()<\/i><\/b>, outside of your application\u2019s execution period.<\/p>\n<p>Cloud Functions are instrumented with monitoring and logging by default, which you can access with <a href=\"https:\/\/cloud.google.com\/monitoring\/charts\/metrics-explorer\">Metrics Explorer<\/a> and <a href=\"https:\/\/cloud.google.com\/logging\/docs\/view\/logs-viewer-preview\">Logs Explorer<\/a>:<\/p>\n<\/div>\n<\/div>\n<div class=\"block-image_full_width\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid\">\n<figure class=\"article-image--large h-c-grid__col h-c-grid__col--6 h-c-grid__col--offset-3 \"><img src=\"https:\/\/storage.googleapis.com\/gweb-cloudblog-publish\/images\/image_3_wI6Lfjn.max-1000x1000.jpg\" alt=\"image 3.jpg\" \/><\/figure>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>For <a href=\"https:\/\/cloud.google.com\/functions\/docs\/monitoring\/logging\">structured logging<\/a>, you can simply use <b><i>JSON.stringify()<\/i><\/b>, which Cloud Logging interprets as structured logs:<\/p>\n<\/div>\n<\/div>\n<div class=\"block-code\">\n<div class=\"article-module h-c-page\">\n<div class=\"h-c-grid uni-paragraph-wrap\">\n<div class=\"uni-paragraph h-c-grid__col h-c-grid__col--8 h-c-grid__col-m--6 h-c-grid__col-l--6 h-c-grid__col--offset-2 h-c-grid__col-m--offset-3 h-c-grid__col-l--offset-3\">\n<pre><code>\/\/ Reconstructed sketch; requestStart is captured when the handler begins.\nconst entry = {\n  severity: 'INFO',\n  message: 'database update complete',\n  \/\/ ms since the request began; late entries hint at unhandled promises:\n  timingDelta: Date.now() - requestStart,\n};\n\/\/ Cloud Logging parses JSON written to stdout as a structured entry:\nconsole.log(JSON.stringify(entry));<\/code><\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"block-paragraph\">\n<div class=\"rich-text\">\n<p>The <b><i>entry<\/i><\/b> payload follows the structure <a href=\"https:\/\/cloud.google.com\/logging\/docs\/structured-logging\">described here<\/a>. 
Note the <b><i>timingDelta<\/i><\/b>, as discussed in <i>\u201cUnderstand the request lifecycle\u201d<\/i>\u2014this information can help you debug whether you have any unhandled promises hanging around after <b><i>res.send()<\/i><\/b>.<\/p>\n<p>There are CPU and network costs associated with logging, so be mindful about the size of entries that you log. For example, avoid logging huge JSON payloads when you could instead log a couple of actionable fields. Consider using an environment variable to vary logging levels; default to relatively terse actionable logs, with the ability to turn on verbose logging for portions of your application using <a href=\"https:\/\/nodejs.org\/api\/util.html#util_util_debuglog_section_callback\" target=\"_blank\" rel=\"noopener noreferrer\">util.debuglog<\/a>.<\/p>\n<h2>Our takeaways from using Cloud Functions<\/h2>\n<p>Cloud Functions work wonderfully for many types of applications:<\/p>\n<ul>\n<li><a href=\"https:\/\/cloud.google.com\/scheduler\">Cloud Scheduler<\/a> tasks: We have a Cloud function that checks for releases stuck in a failed state every 30 minutes.<\/li>\n<li><a href=\"https:\/\/cloud.google.com\/functions\/docs\/calling\/pubsub\">Pub\/Sub consumers<\/a>: One of our functions parses XML unit test results from a queue, and opens issues on GitHub for flaky tests.<\/li>\n<li>HTTP APIs: We use Cloud Functions to accept Webhooks from the GitHub API; for us it\u2019s okay if requests occasionally take a few extra seconds due to cold starts.<\/li>\n<\/ul>\n<p>As it stands today, though, it\u2019s not possible to completely eliminate cold starts with Cloud Functions: <i>instances are occasionally restarted, and bursts of traffic lead to new instances being started.<\/i> As such, Cloud Functions still isn\u2019t a great fit for applications that can\u2019t shoulder the additional seconds that cold starts occasionally add. 
As an example, blocking a user-facing UI update on the response from a Cloud Function is not a good idea.<\/p>\n<p>We want Cloud Functions to work for these types of time-sensitive applications, and have features in the works to make this a reality:<\/p>\n<ul>\n<li>Allowing a minimum number of instances to be specified; this will allow you to avoid cold starts for typical traffic patterns (with new instances only being allocated when requests are made above the threshold of minimum instances).<\/li>\n<li>Performance improvements to disk operations in <a href=\"https:\/\/github.com\/google\/gvisor\" target=\"_blank\" rel=\"noopener noreferrer\">gVisor<\/a>, the sandbox that Cloud Functions run within: A percentage of cold-start time is spent loading resources into memory from disk, which these changes will speed up.<\/li>\n<li>Publishing individual APIs from <i>googleapis<\/i> on npm. This will make it possible for people to write Cloud functions that interact with popular Google APIs, without having to pull in the entire <i>googleapis<\/i> dependency.<\/li>\n<\/ul>\n<p>With all that said, it\u2019s been a blast developing our automation framework on Cloud Functions, which, if you accept the constraints and follow the practices outlined in this article, is a great option for deploying small Node.js applications.<\/p>\n<p>Have feedback on this article? Have an idea as to how we can continue to improve Cloud Functions for your use case? 
Don\u2019t hesitate to open an issue on our <a href=\"https:\/\/issuetracker.google.com\/savedsearches\/559729\" target=\"_blank\" rel=\"noopener noreferrer\">public issue tracker<\/a>.<\/p>\n<hr \/>\n<p><i><sup>1.\u00a0<a href=\"https:\/\/hpbn.co\/primer-on-latency-and-bandwidth\/#speed-of-light-and-propagation-latency\" target=\"_blank\" rel=\"noopener noreferrer\">High Performance Browser Networking<\/a><\/sup><\/i><\/p>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The DPE Client Library team at Google handles the release maintenance, and support of Google Cloud client libraries. Essentially, we act as the open-source maintainers of Google\u2019s 350+ repositories on GitHub. 
It\u2019s a big job&#8230; For this work to scale, it\u2019s been critical to automate various common tasks such as validating licenses, managing releases, and [&hellip;]<\/p>\n","protected":false},"author":53,"featured_media":2461,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[445],"tags":[447,446],"_links":{"self":[{"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/posts\/2453"}],"collection":[{"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/users\/53"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/comments?post=2453"}],"version-history":[{"count":5,"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/posts\/2453\/revisions"}],"predecessor-version":[{"id":37833,"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/posts\/2453\/revisions\/37833"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/media\/2461"}],"wp:attachment":[{"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/media?parent=2453"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/categories?post=2453"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ntsplhosting.com\/blog\/wp-json\/wp\/v2\/tags?post=2453"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}