Deploy a Next.js application at 3 different budgets | by josephmark

Josephmark Studio

At Josephmark we work in sectors ranging from fintech to sextech, and part of that job includes designing and building great-looking web applications for brand-new businesses. While we really like the MVP (minimum viable product) approach of launching and testing a young product in the world, we are equally comfortable working with established brands that have vendor partnerships and complex product requirements like availability and data sovereignty.

Enter Sam Haakman, Full Stack Developer at JM. Although he has been a full-time engineer for six years, he took his first paid development gig back in 2010. “Radical honesty” has become the mantra that runs through his work, and he certainly lives up to it, especially when it comes to improving the accessibility, speed and flexibility of our projects. We managed to grab some of Sam’s valuable time to give you an overview of our engineering frameworks.

In this article, Sam is going to explain how we approach the deployment of our favorite front-end web framework, Next.js, at three levels of complexity and cost. Our solutions start at less than $1/month and scale to over $100/month.

For our motion design studio, Breeder, we needed a quick and easy way to put their portfolio online. The site has a small footprint and the content doesn’t change often, so choosing a static deployment was a simple and very cost effective solution.

Static Deployment — Breeder

Breeder’s data starts in Prismic, on their community plan ($0/month). We have three content types: home-page layout, case studies and career listings.

Then we use Next.js’ getStaticProps function to retrieve data from Prismic and populate our UI. next/dynamic allows us to create flexible layouts using Prismic “slices” without increasing the package size.
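The slice-rendering idea can be sketched in plain JavaScript. This is a hypothetical illustration, not Breeder’s real schema: each Prismic slice carries a `slice_type`, which we look up in a registry whose values would, in the real app, be components created with `next/dynamic` (e.g. `dynamic(() => import("../slices/Hero"))`) so each slice’s code stays out of the main bundle.

```javascript
// Hypothetical sketch of slice-to-component resolution. The slice types and
// registry values here are illustrative; in practice the registry would hold
// next/dynamic components so each slice is code-split.
function resolveSlices(slices, registry) {
  return slices
    .filter((slice) => slice.slice_type in registry) // ignore unknown slices
    .map((slice) => ({
      component: registry[slice.slice_type],
      props: slice.primary, // Prismic puts non-repeating fields in `primary`
    }));
}
```

Unknown slice types are silently dropped, so editors can add new slices in Prismic before the front end supports them without breaking the build.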

We use Bitbucket Pipelines (50 free min/month, $0) to build and deploy the codebase to AWS S3 ($0.01/month), then request a cache purge from Cloudflare ($0/month). Videos are simply hosted on Vimeo.

pipelines:
  branches:
    main:
      - step:
          name: Build & Export site to HTML
          caches:
            - node
            - next
          image: node:17-alpine
          script:
            - npm i --production
            - npm run build
            - npm run export
          artifacts:
            - out/**
      - step:
          name: Push Site to S3
          deployment: production
          image: atlassian/pipelines-awscli:latest
          script:
            - aws configure set aws_access_key_id "${AWS_ACCESS_KEY}"
            - aws configure set aws_secret_access_key "${AWS_SECRET_ACCESS_KEY}"
            - aws s3 sync --delete --exclude "*.html" --cache-control "public,max-age=31536000,immutable" out s3://${AWS_S3_BUCKET}
            - aws s3 sync --delete --exclude "*" --include "*.html" out s3://${AWS_S3_BUCKET}
      - step:
          name: Purge Cloudflare
          image: curlimages/curl
          script:
            # The purge URL was missing in the original; restored from
            # Cloudflare's documented cache-purge endpoint. ${CLOUDFLARE_ZONE}
            # is an assumed variable name.
            - >
              curl -X POST "https://api.cloudflare.com/client/v4/zones/${CLOUDFLARE_ZONE}/purge_cache"
              -H "Authorization: Bearer ${CLOUDFLARE_KEY}"
              -H "Content-Type: application/json"
              --data '{"purge_everything":true}'
definitions:
  caches:
    next: .next

Finally, we implemented a Lambda function that gives the CMS a webhook to hit: when content changes in Prismic, the function triggers a fresh Pipelines build on the main branch.

const https = require("https")

exports.handler = async (event) => {
const secret = JSON.parse(event?.body)?.secret
if (!secret) {
return {
statusCode: 401,
body: JSON.stringify({ message: "No auth found", request: event })
}
}
const data = await new Promise((resolve) => {
const req = https.request(
{
hostname: "api.bitbucket.org",
port: 443,
path: `/2.0/repositories/${process.env.BITBUCKET_REPOSITORY}/pipelines/`,
method: "POST",
headers: {
Authorization: `Basic ${secret}`,
"Content-Type": "application/json"
},
},
(resp) => {
let d = ''
resp.on("data", chunk => {
d += chunk
})
resp.on("end", () => {
resolve(d)
})
}
)
req.write(JSON.stringify({
target: {
ref_type: "branch",
type: "pipeline_ref_target",
ref_name: "main",
},
})
)
req.end()
})

const response = {
statusCode: data?.code || 200,
body: JSON.stringify({"message": "Yeah good", data, input: event}),
};
return response;
};

  • Prismic $0
  • AWS S3 $0.01
  • AWS Lambda $0
  • Bitbucket Pipelines $0
  • Cloudflare $0

We recently created the Digital Seller Assistant web application for SWOP, a charming local circular-fashion retailer focused on high-quality pre-loved pieces. As an established brick-and-mortar business with early-stage digital capability, we didn’t want to commit them to a complex, high-maintenance bespoke hosted solution. Next.js’ image optimization and serverless features aren’t available with a static export, so we took a middle route with managed hosting.

Managed Hosting — SWOP

SWOP uses FL0’s new alpha API tooling ($0 up to 1 million requests; leave a comment if you want to hear our thoughts on this new technology!). Clothing brands, categories and prices all come from FL0. We use SendGrid, also triggered via FL0, to notify stores and sellers when a courier or in-store booking is made.

As before, we pre-load some data with getStaticProps. The layout here is fixed, although we still use next/dynamic to defer loading the heaviest components inside the overlay.
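The deferral trick that `next/dynamic` applies can be sketched in plain JavaScript. This is an illustrative reimplementation of the idea, not Next.js internals: the component’s loader runs only when first needed, and the resulting promise is memoized so the chunk is fetched at most once.

```javascript
// Illustrative sketch of deferred loading à la next/dynamic.
// `loader` stands in for something like () => import("../components/HeavyOverlay").
function lazy(loader) {
  let cached; // memoized promise: the chunk is requested at most once
  return () => (cached ??= loader());
}

// Example: the "heavy" module is only loaded on first use.
let loads = 0;
const getOverlay = lazy(() => {
  loads += 1;
  return Promise.resolve({ name: "HeavyOverlay" });
});
```

Until `getOverlay()` is first called, no request for the heavy chunk is made at all, which is exactly why deferring the overlay’s contents keeps the initial bundle small.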

For this project, we are hosting on Vercel ($20/month/user) although we are also partial to Netlify which has a similar feature set and pricing.

We decided to deploy through Pipelines to avoid adding our entire Bitbucket organization to Vercel’s billing.

pipelines:
  branches:
    main:
      - step:
          image: node:17-alpine
          name: Prod Deploy
          deployment: production
          caches:
            - node
          script:
            - npx vercel --prod -b BITBUCKET_COMMIT=$BITBUCKET_COMMIT -t $VERCEL_TOKEN
  • FL0 $0
  • Vercel $20
  • Bitbucket Pipelines $0
  • SendGrid $0

Finally, we have our Big Bertha: the full containerized deployment. We break this out when we need maximum control over our stack and over features such as data-center or availability-zone redundancy, blue-green deployments, and localized data sovereignty and security. As a bonus, we can also run whatever backend we like without significantly increasing the cost. Builds like josephmark.studio and thepeoplesgrid.com are where we take this approach. $150/month is the low end for a deployment like this, and ultimately the sky is the limit depending on project requirements: features like data replication, machine-learning compute and high-capacity databases will quickly push those numbers into the four-to-six-figure range.

Containerized Hosting — Josephmark

We’re not precious about CMS selection, but for the sake of discussion, josephmark.studio uses Strapi for the backend. We make liberal use of Strapi’s dynamic zones to let our team create composable layouts that follow a predefined style guide. Admins can build beautiful pages without designers losing their minds over someone changing the font to Papyrus.

getStaticProps continues to be the star of the show for data fetching, but we also use Next’s preview mode alongside custom Strapi code to let content writers preview case studies as they’re written.

We run the front-end and back-end containers in an AWS ECS cluster on Fargate serverless compute. Colocating the front end and back end makes more efficient use of the available resources, and both are easy to scale if things start to slow down. Database traffic stays inside a private subnet, so it is well protected against attack.

To keep our compute instances super lightweight, we’re also moving the image optimization pipeline to a serverless Lambda function. This allows image optimizations to easily persist across deployments, and optimized images are cached by CloudFront for lightning-fast global availability.
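One detail worth sketching: for CloudFront to cache optimized variants effectively, equivalent requests have to normalize to the same cache key. A hypothetical normalizer is below; the parameter names `url`, `w` and `q` mirror Next.js image-request conventions, but the defaults and bounds are illustrative, not the real pipeline’s values.

```javascript
// Hypothetical sketch: collapse image-optimization query params into a
// canonical key so CloudFront stores one object per distinct variant.
function imageCacheKey({ url, w, q }) {
  const width = Math.min(Number(w) || 1080, 3840);             // clamp width
  const quality = Math.min(Math.max(Number(q) || 75, 1), 100); // clamp quality
  return `${encodeURIComponent(url)}/w=${width},q=${quality}`;
}
```

Clamping also protects the Lambda: a request for an absurd width can’t force an expensive resize, and out-of-range values all collapse onto keys CloudFront already holds.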

We still rely on Cloudflare for its hassle-free security and performance features and run our CI/CD on Bitbucket Pipelines.

No configuration here – that would be giving away the secret recipe. 😉

  • AWS ECS $16
  • AWS Fargate compute $58
  • AWS EC2 load balancer $48
  • AWS RDS $25
  • AWS Lambda $0
  • AWS S3 $0.63
  • AWS CloudFront $0
  • Cloudflare $0
  • Bitbucket Pipelines $0

