🚀 Frontend Performance Optimization

✅ Cookies vs Local Storage vs Session Storage: Deep Dive

๐Ÿช Cookies Storage

Cookies have a strict storage limit, around 4KB, which makes them suitable for small but critical pieces of data. One of the most powerful aspects of cookies is that they operate on both the server and client side. Every time you send an HTTP request (like visiting a route or fetching data), the browser includes the cookies automatically in the request headers. This behavior makes them perfect for authentication flows, particularly when you're working over HTTPS and using security flags like HttpOnly, Secure, and SameSite. This way, the token stored in the cookie is not accessible from JavaScript and is protected from XSS attacks.

Cookies are ideal for storing things like tokens, session identifiers, or anything that needs to be shared between the client and server seamlessly, without manually attaching it to every request. Because the browser handles this, they are extremely effective when you want automatic synchronization between client-server interactions.
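To make the flags concrete, here is a minimal sketch of building a hardened Set-Cookie header on the server. The cookie name and Max-Age are illustrative choices, not a standard:

```javascript
// Sketch: a Set-Cookie header for a session token, with the security
// flags discussed above. A framework would normally build this for you.
function buildSessionCookie(token) {
  return [
    `session=${encodeURIComponent(token)}`,
    'Max-Age=3600',      // expire after one hour
    'Path=/',
    'HttpOnly',          // not readable from JavaScript -> mitigates token theft via XSS
    'Secure',            // only sent over HTTPS
    'SameSite=Strict',   // not sent on cross-site requests -> mitigates CSRF
  ].join('; ');
}

console.log(buildSessionCookie('abc123'));
// session=abc123; Max-Age=3600; Path=/; HttpOnly; Secure; SameSite=Strict
```

Once the server sends this header, the browser attaches the cookie to every matching request automatically.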

Key traits:

- ~4KB size limit per cookie
- Sent automatically with every HTTP request to the matching domain
- Work on both client and server (not readable from JS when HttpOnly is set)
- Can be locked down with the HttpOnly, Secure, and SameSite flags

๐Ÿ—‚๏ธ Local Storage

LocalStorage, on the other hand, has a much higher storage limit (usually around 5–10MB, depending on the browser). Unlike cookies, it is purely client-side and not sent with HTTP requests. It persists even after closing the browser, so it is ideal for storing application state, user preferences, or temporary data that you want to keep across sessions.

For instance, imagine your app has a user onboarding flow and you want to store which step the user is on. If they close the browser and return later, LocalStorage allows you to retrieve that information and resume exactly where they left off.
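The onboarding case might be sketched like this. In a browser you would use window.localStorage directly; the in-memory stub below exists only so the logic can run outside a browser:

```javascript
// Sketch: persisting the user's onboarding step across sessions.
// Fall back to a tiny in-memory stub when no real localStorage exists.
const storage = typeof localStorage !== 'undefined' ? localStorage : (() => {
  const data = {};
  return {
    setItem: (k, v) => { data[k] = String(v); },
    getItem: (k) => (k in data ? data[k] : null),
  };
})();

function saveOnboardingStep(step) {
  storage.setItem('onboardingStep', String(step));
}

function resumeOnboarding() {
  const saved = storage.getItem('onboardingStep');
  return saved !== null ? Number(saved) : 1; // default to the first step
}

saveOnboardingStep(3);
console.log(resumeOnboarding()); // 3
```

Note that Web Storage only holds strings, hence the String/Number conversions.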

Key traits:

- ~5–10MB limit, depending on the browser
- Purely client-side; never sent with HTTP requests
- Persists across browser restarts until explicitly cleared
- Simple synchronous API (setItem, getItem, removeItem)

🕒 Session Storage

SessionStorage is very similar to LocalStorage in terms of API (both use setItem, getItem, etc.), but it behaves differently: it is cleared when the tab or browser window is closed. It's scoped per tab, so if you open the same page in two tabs, each one will have a separate session storage.

SessionStorage is not used as much, but it's perfect for use cases like checkout pages or form inputs, where you don't want the user to lose progress if the page reloads or something fails, but you also don't want to persist that data forever. Since it's automatically cleared, you don't need to implement cleanup logic.
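A sketch of the checkout use case, with the storage object passed in so the same logic works against window.sessionStorage in a real browser (the field names and the `checkoutDraft` key are illustrative):

```javascript
// Sketch: keeping checkout form input alive across a reload, but not
// after the tab closes. Pass window.sessionStorage in a real app.
function persistForm(storage, fields) {
  storage.setItem('checkoutDraft', JSON.stringify(fields));
}

function restoreForm(storage) {
  const raw = storage.getItem('checkoutDraft');
  return raw ? JSON.parse(raw) : {}; // empty draft if nothing was saved
}

// Minimal stand-in for sessionStorage so the sketch runs anywhere:
const fakeSession = new Map();
const storage = {
  setItem: (k, v) => fakeSession.set(k, v),
  getItem: (k) => (fakeSession.has(k) ? fakeSession.get(k) : null),
};

persistForm(storage, { name: 'Ada', address: '12 Main St' });
console.log(restoreForm(storage).name); // "Ada"
```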

Key traits:

- Same API as LocalStorage (setItem, getItem, etc.)
- Scoped per tab; two tabs of the same page get separate storage
- Cleared automatically when the tab or window is closed, so no cleanup logic needed

🧠 If you are setting up a new frontend application, what are some optimizations you would put in place to make it more performant?

๐Ÿ‘ทโ€โ™€๏ธ Let's assume this is a modern frontend React application using client-side rendering and something like Webpack or Vite as a module bundler.

1. Polyfilling the Code (Backward Compatibility)

In any frontend project, you'll likely want to use modern JavaScript features. Strictly speaking, new syntax (async/await, the spread operator, optional chaining) is handled by transpilation with a tool like Babel, while missing runtime APIs (Promise, fetch) are what you polyfill. Not all browsers support these features, especially older ones like Internet Explorer 11, so you need to transpile and polyfill your code.

💡 What is a polyfill? It's a piece of code (usually a function or method) that provides modern functionality on older browsers that do not natively support it.

Here's how the process works:

Example: If the browser doesn't support Promise, you might import a Promise polyfill from a library like core-js.

📦 The tradeoff: yes, this increases the bundle size, but it ensures the app runs on more devices, increasing accessibility.

// Load polyfills once, at the very top of the entry file
import 'core-js/features/promise'; // Promise for older browsers
import 'whatwg-fetch'; // window.fetch polyfill
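To show what a polyfill actually looks like inside, here is a minimal hand-written one for Array.prototype.at, guarded by feature detection; libraries like core-js do this at scale:

```javascript
// Sketch: a hand-rolled polyfill for Array.prototype.at.
// Only installed if the environment doesn't already provide it.
if (!Array.prototype.at) {
  Array.prototype.at = function (index) {
    const i = Math.trunc(index) || 0;
    const n = i < 0 ? this.length + i : i; // negative indices count from the end
    return n >= 0 && n < this.length ? this[n] : undefined;
  };
}

console.log([10, 20, 30].at(-1)); // 30
```

The feature-detection guard is what keeps polyfills safe: modern browsers keep their fast native implementation.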

2. Bundle Compression (Gzip/Brotli)

Once your JavaScript is bundled together, you want to reduce the file size as much as possible before it's sent over the network. Instead of sending a raw .js file, you send a compressed gzip or brotli file.

This can reduce file size by up to 70%, which has a huge impact on load time, especially for mobile users or slow networks.

You use content negotiation via HTTP headers: the browser advertises what it supports with Accept-Encoding: gzip, br, and the server replies with Content-Encoding: gzip, so the browser automatically decompresses the file and executes it like normal.

In Webpack:

const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  plugins: [
    new CompressionPlugin({
      algorithm: 'gzip',
    }),
  ],
};

3. Uglification and Minification

This is a classic optimization. You take human-readable JS and remove all unnecessary characters:

const userLoggedIn = true;

gets transformed into:

let a=!0;

This process makes your file smaller but unreadable, which is why you also generate source maps, so you can still debug your original code when an error happens in production.

Without source maps: You get an error on line 1 of main.min.js, which is useless.

With source maps: You know the original line of code in auth.js:45 that caused it.

4. Code Splitting (Load What You Need)

Instead of shipping all your JavaScript upfront, split your code into logical chunks and load them on-demand.

Example:

const CheckoutPage = React.lazy(() => import('./Checkout'));

This is useful because on initial page load, you might not need the checkout page at all. So don't load it. Instead, lazy-load it when the user navigates to it.

Webpack/Vite will handle the chunking for you.
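Under the hood, React.lazy is built on the dynamic import() function. The sketch below lazily loads a built-in Node module as a stand-in for './Checkout', but the mechanics are the same: the code is fetched only when the function runs, not at startup:

```javascript
// Sketch: the dynamic import() that React.lazy and route-based
// code splitting are built on.
async function openCheckout() {
  // In a real app this would be: await import('./Checkout')
  const { join } = await import('node:path'); // stand-in module
  return join('app', 'checkout');
}

openCheckout().then((route) => console.log(route));
// logs "app/checkout" on POSIX systems
```

Because import() is a distinct call site the bundler can see, Webpack/Vite turn each one into a separate chunk automatically.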

5. Tree Shaking (Eliminate Dead Code)

If you're using ES6 static imports, Webpack or Vite can analyze your code and remove unused parts of a library, a process known as tree shaking.

Example:

import { Button } from 'my-ui-library';

If you don't use Modal, Tooltip, etc., those won't be included in the bundle, keeping it as small as possible.

💡 Tree shaking relies on static analysis, so it does not work with CommonJS require(); dynamic import() produces separate chunks rather than being tree-shaken away.

6. Dependency Graph

Webpack builds a dependency graph by analyzing your import statements starting from the entry file (usually index.js). It forms a tree structure of modules and figures out what is needed, what can be bundled together, and what can be eliminated.

This ensures optimal bundling and helps during code splitting and tree shaking.

CSS-in-JS

Traditionally, CSS handles styles and JS handles interactivity; CSS-in-JS merges the two, so components carry their own styles instead of relying on separate stylesheets.

Use cases:

- Styles scoped to a component, with no global class-name collisions
- Theming and styles that change dynamically based on props or state
- Unused components take their styles with them when code is removed

Disadvantages:

- Runtime overhead for generating and injecting styles
- Styles ship inside the JS bundle instead of a separately cacheable CSS file
- Extra complexity with server-side rendering
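The core trick behind libraries like styled-components can be sketched in a few lines: hash the CSS text into a class name and collect the generated rules. A real library would also inject them into a style tag in the document head:

```javascript
// Sketch: a toy CSS-in-JS runtime. The hash is a simple
// non-cryptographic one, just enough for stable class names.
const rules = [];

function css(styleText) {
  let hash = 0;
  for (const ch of styleText) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  const className = `css-${hash.toString(36)}`;
  rules.push(`.${className} { ${styleText} }`); // a real lib injects this into <style>
  return className;
}

const buttonClass = css('color: white; background: rebeccapurple;');
console.log(buttonClass); // a hash-derived class name like "css-…"
console.log(rules[0]);    // the generated rule for that class
```

The runtime cost of exactly this kind of work at render time is the main performance disadvantage listed above.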

7. Image & Asset Optimization

For frontend apps with very large images, how would we optimize for performance?
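The standard levers are: serve responsively sized images via srcset so the browser picks the smallest one that fits, lazy-load offscreen images, prefer modern formats like WebP/AVIF, and serve from a CDN. As a sketch, a helper that builds a srcset attribute; the widths and the `?w=` URL scheme are illustrative and depend on your image CDN:

```javascript
// Sketch: building a srcset so the browser can pick the smallest
// image variant that fits the layout.
function buildSrcset(baseUrl, widths) {
  return widths.map((w) => `${baseUrl}?w=${w} ${w}w`).join(', ');
}

const srcset = buildSrcset('/images/hero.jpg', [320, 640, 1280]);
console.log(srcset);
// /images/hero.jpg?w=320 320w, /images/hero.jpg?w=640 640w, /images/hero.jpg?w=1280 1280w
// In markup: <img src="…" srcset={srcset} sizes="100vw" loading="lazy" />
```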

Performance challenges in frontend

How do you manage code quality in a large scale frontend application? What tools and practices do you use?

🧹 For code quality, I would start with a linter: you want to catch small issues early and make sure everybody writes the same kind of code.

🎨 You're going to have Prettier and ESLint set up, and if you're using TypeScript, typescript-eslint (TSLint is deprecated).

🧠 This takes off a lot of work and communication because everybody writes code in the same style.

🧪 After that, you do want a layer of unit tests, and ideally some E2E tests as well.

๐Ÿ” and finally, i would have something like a dependency scan.

In the linter, have something that scans for a11y, which stands for accessibility.

🧰 So now we're taking care of code quality, style, accessibility, testing, and dependencies (node modules especially can be vulnerable to different attacks).

📊 Finally, have something like Lighthouse or Sentry in your pipeline. 📈 They tell you how your Core Web Vitals and overall web performance change over time. ⚠️ So if we make a mistake, like adding a big image or extra fonts, we immediately see the effect and can fix it early on.
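The linting pieces above might be wired together roughly like this .eslintrc.js sketch; the plugin names are the commonly used ones, but the exact set depends on your stack:

```javascript
// Sketch: an ESLint config combining correctness rules, a11y
// scanning, and Prettier integration.
module.exports = {
  extends: [
    'eslint:recommended',
    'plugin:jsx-a11y/recommended', // accessibility (a11y) rules in the linter
    'prettier',                    // disable rules that conflict with Prettier
  ],
  plugins: ['jsx-a11y'],
  rules: {
    'no-unused-vars': 'error',     // catch small issues before review
  },
};
```

A dependency scan (e.g. `npm audit` in CI) then covers the node-modules side of the pipeline.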

โ“ what is an xss attack and how do you make sure that frontend apps are not vulnerable to those attacks?

💥 XSS = cross-site scripting.

👾 In the stored variant, the attacker persists some JavaScript code in our database.

๐ŸŒ then users, because they fetch from our database, end up running that code in their browser.

โ“ what are micro-frontends? when would you use frontend architecture?

🧩 In a frontend, we usually have different components: header, body of the page, checkout info, etc.

👥 This works fine for small teams, but as we scale, it becomes really hard to contribute to a single frontend monolith.

โš™๏ธ so what we can do is split those individual parts into different applications.

🧱 And then we have a shell, kind of like a container, that puts them all together.

๐Ÿ” this shell can be responsible for things like authentication, and shared state.

🚀 Now we can deploy the different apps independently, which allows us to separate development teams and makes development much faster and more modular.

โš ๏ธ BUT you pay the price of complexity โ€” you need more complex tooling to make something like this happen (routing, shared packages, auth sync, etc).

🧠 When does it make sense?

🧱 When we already have a big monolithic app, and we're thinking of breaking it into micro-frontends.

👥 Micro-frontends are mostly good for splitting organisations: you need to split people into teams that can work independently.

🔗 But because we're distributing our system, you always pay a price for that, especially things like:

- Duplicated dependencies and bigger total payloads
- Version skew between independently deployed apps
- Keeping routing, auth, and the look-and-feel consistent across apps

✅ So when we need technology to enable parallel work without cross-dependencies, micro-frontends can help.

🚫 But if you're building smaller websites or simple apps, there's no real reason to overcomplicate: just stick with a regular monolithic frontend setup.