Chapter 13: Performance Optimization
Authored by syscook.dev
What is Performance Optimization in TypeScript?
Performance optimization in TypeScript covers the techniques and strategies used to improve the runtime performance, bundle size, and overall efficiency of TypeScript applications: executing code faster, shipping less JavaScript, and loading it sooner.
Key Concepts:
- Bundle Size Optimization: Reducing the size of compiled JavaScript
- Tree Shaking: Removing unused code from bundles
- Code Splitting: Breaking code into smaller, loadable chunks
- Lazy Loading: Loading code only when needed
- Memory Management: Efficient memory usage and garbage collection
- Runtime Performance: Optimizing code execution speed
Why Optimize Performance?
1. Faster Loading Times
Optimized bundles load faster, improving user experience.
// Without optimization - large bundle
import * as _ from 'lodash'; // Imports entire library
import { Component1, Component2, Component3 } from './components'; // All components loaded
// With optimization - smaller bundle
import { debounce } from 'lodash-es'; // Only imports needed function
import { Component1 } from './components/Component1'; // Only loads needed component
2. Better Runtime Performance
Optimized code runs faster and uses less memory.
// Without optimization - four passes and three intermediate arrays
interface Item { active: boolean; value: number; }
function processDataSlow(data: Item[]): number[] {
return data
.filter(item => item.active)
.map(item => ({ ...item, processed: true }))
.filter(item => item.value > 0)
.map(item => item.value * 2);
}
// With optimization - a single pass, no intermediate arrays
function processData(data: Item[]): number[] {
const result: number[] = [];
for (const item of data) {
if (item.active && item.value > 0) {
result.push(item.value * 2);
}
}
return result;
}
How to Optimize Performance?
1. Bundle Size Optimization
Tree Shaking
// Good: Named imports for tree shaking
import { debounce, throttle } from 'lodash-es';
// Avoid: Default imports that prevent tree shaking
import _ from 'lodash';
// Good: Specific utility imports
import debounce from 'lodash/debounce'; // per-method lodash modules use default exports
// Avoid: Importing entire modules
import * as utils from './utils';
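Named imports alone are not enough: the bundler must also know the modules are free of import-time side effects before it can drop unused code. One common, bundler-dependent way to declare this is the `sideEffects` field in package.json — a sketch, assuming a webpack- or Rollup-style bundler:

```json
{
  "name": "my-app",
  "type": "module",
  "sideEffects": false
}
```

With `"sideEffects": false` the bundler may remove any imported module whose exports go unused; if some files do run code on import (polyfills, global CSS), list those files instead of using `false`.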
Code Splitting
// Dynamic imports for code splitting
const LazyComponent = React.lazy(() => import('./LazyComponent'));
// Route-based code splitting
const routes = [
{
path: '/dashboard',
component: React.lazy(() => import('./Dashboard'))
},
{
path: '/profile',
component: React.lazy(() => import('./Profile'))
}
];
// Conditional loading
async function loadFeature(featureName: string) {
switch (featureName) {
case 'analytics':
return import('./features/analytics');
case 'reporting':
return import('./features/reporting');
default:
throw new Error('Unknown feature');
}
}
2. Runtime Performance Optimization
Efficient Data Structures
// Use appropriate data structures
class UserCache {
private users = new Map<number, User>(); // O(1) lookup
private userIndex = new Map<string, number>(); // O(1) email lookup
addUser(user: User): void {
this.users.set(user.id, user);
this.userIndex.set(user.email, user.id);
}
getUserById(id: number): User | undefined {
return this.users.get(id);
}
getUserByEmail(email: string): User | undefined {
const id = this.userIndex.get(email);
return id !== undefined ? this.users.get(id) : undefined; // strict check: an id of 0 is falsy but valid
}
}
// Avoid inefficient operations
class InefficientUserCache {
private users: User[] = []; // O(n) lookup
addUser(user: User): void {
this.users.push(user);
}
getUserById(id: number): User | undefined {
return this.users.find(user => user.id === id); // O(n) operation
}
}
Memoization
// Memoization for expensive calculations
function expensiveCalculation(n: number): number {
console.log(`Calculating for ${n}`);
return n * n * n;
}
// Simple memoization
const memoizedCalculation = (() => {
const cache = new Map<number, number>();
return (n: number): number => {
if (cache.has(n)) {
return cache.get(n)!;
}
const result = expensiveCalculation(n);
cache.set(n, result);
return result;
};
})();
// Usage
console.log(memoizedCalculation(5)); // Calculates
console.log(memoizedCalculation(5)); // Uses cache
console.log(memoizedCalculation(3)); // Calculates
console.log(memoizedCalculation(5)); // Uses cache
Debouncing and Throttling
// Debouncing for search input
function createDebouncedSearch<T>(
searchFn: (query: string) => Promise<T[]>,
delay: number = 300
) {
let timeoutId: ReturnType<typeof setTimeout> | undefined; // portable timer type; note: promises from superseded calls never settle
return (query: string): Promise<T[]> => {
return new Promise((resolve) => {
clearTimeout(timeoutId);
timeoutId = setTimeout(async () => {
const results = await searchFn(query);
resolve(results);
}, delay);
});
};
}
// Throttling for scroll events
function createThrottledScroll(
scrollFn: (event: Event) => void,
delay: number = 100
) {
let lastCall = 0;
return (event: Event) => {
const now = Date.now();
if (now - lastCall >= delay) {
lastCall = now;
scrollFn(event);
}
};
}
// Usage
const debouncedSearch = createDebouncedSearch(async (query) => {
// Simulate API call
await new Promise(resolve => setTimeout(resolve, 100));
return [{ id: 1, name: query }];
});
const throttledScroll = createThrottledScroll((event) => {
console.log('Scroll event handled');
});
// Simulate usage
debouncedSearch('test');
debouncedSearch('test');
debouncedSearch('test'); // Only last one executes
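The simple throttle above drops whatever happens after the last allowed call, so the final event in a burst (e.g. the last scroll position) is never handled. A common refinement fires once more on the trailing edge; this is a sketch, and the function name is my own:

```typescript
// Throttle that fires on the leading edge and also once on the trailing edge,
// so the final event in a burst is not lost
function createThrottledWithTrailing<A extends unknown[]>(
  fn: (...args: A) => void,
  delay: number = 100
) {
  let lastCall = 0;
  let trailing: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    const now = Date.now();
    const remaining = delay - (now - lastCall);
    if (remaining <= 0) {
      lastCall = now;
      fn(...args); // leading-edge call
    } else {
      // Re-schedule a single trailing invocation carrying the latest arguments
      if (trailing !== undefined) clearTimeout(trailing);
      trailing = setTimeout(() => {
        lastCall = Date.now();
        trailing = undefined;
        fn(...args);
      }, remaining);
    }
  };
}
```

This keeps the rate limit of the simple version while guaranteeing the handler eventually sees the last event of a burst.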
3. Memory Management
Proper Cleanup
// Proper cleanup for event listeners
class EventManager {
private listeners = new Map<string, EventListener[]>();
addEventListener(element: HTMLElement, event: string, handler: EventListener): void {
element.addEventListener(event, handler);
if (!this.listeners.has(event)) {
this.listeners.set(event, []);
}
this.listeners.get(event)!.push(handler);
}
removeAllListeners(element: HTMLElement): void {
this.listeners.forEach((handlers, event) => {
handlers.forEach(handler => {
element.removeEventListener(event, handler);
});
});
this.listeners.clear();
}
}
// Proper cleanup for intervals and timeouts
class TimerManager {
private timers = new Set<NodeJS.Timeout>();
setInterval(callback: () => void, delay: number): NodeJS.Timeout {
const timer = setInterval(callback, delay);
this.timers.add(timer);
return timer;
}
setTimeout(callback: () => void, delay: number): NodeJS.Timeout {
const timer = setTimeout(() => {
this.timers.delete(timer);
callback();
}, delay);
this.timers.add(timer);
return timer;
}
clearAll(): void {
this.timers.forEach(timer => {
clearInterval(timer); // clearTimeout/clearInterval are interchangeable, so one call clears either kind
});
this.timers.clear();
}
}
Weak References
// Using WeakMap for memory-efficient caching
class WeakCache<T> {
private cache = new WeakMap<object, T>();
set(key: object, value: T): void {
this.cache.set(key, value);
}
get(key: object): T | undefined {
return this.cache.get(key);
}
has(key: object): boolean {
return this.cache.has(key);
}
}
// Usage
const cache = new WeakCache<string>();
const user = { id: 1, name: 'John' };
cache.set(user, 'cached data');
// When user is garbage collected, cache entry is automatically removed
4. Async Performance
Concurrent Operations
// Concurrent API calls
async function fetchUserData(userId: number) {
const [user, posts, comments] = await Promise.all([
fetchUser(userId),
fetchUserPosts(userId),
fetchUserComments(userId)
]);
return { user, posts, comments };
}
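`Promise.all` fails fast: if any one request rejects, the results of the others are discarded. When partial results are acceptable, `Promise.allSettled` is the fault-tolerant alternative. A sketch (the helper name is my own):

```typescript
// Fault-tolerant variant: Promise.allSettled never rejects, so one failed
// request does not discard the results of the others
async function fetchAllSettled<T>(tasks: Array<Promise<T>>): Promise<Array<T | undefined>> {
  const settled = await Promise.allSettled(tasks);
  // Fulfilled results keep their value; rejected ones become undefined
  return settled.map(s => (s.status === 'fulfilled' ? s.value : undefined));
}

// Usage: a failed call comes back as undefined instead of aborting the batch
// const [user, posts] = await fetchAllSettled([fetchUser(1), fetchUserPosts(1)]);
```

Whether to use `all` or `allSettled` depends on whether the combined result is meaningless without every part, or degrades gracefully.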
// Batch processing
async function processBatch<T, R>(
items: T[],
processor: (item: T) => Promise<R>,
batchSize: number = 10
): Promise<R[]> {
const results: R[] = [];
for (let i = 0; i < items.length; i += batchSize) {
const batch = items.slice(i, i + batchSize);
const batchResults = await Promise.all(
batch.map(item => processor(item))
);
results.push(...batchResults);
}
return results;
}
// Usage
const items = Array.from({ length: 100 }, (_, i) => i);
const results = await processBatch(items, async (item) => {
// Simulate processing
await new Promise(resolve => setTimeout(resolve, 10));
return item * 2;
}, 20);
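Fixed batches have a weakness: every batch waits for its slowest item before the next batch starts. A sliding-window worker pool keeps `limit` tasks in flight at all times instead. This is a sketch under that design, and the function name is my own:

```typescript
// Sliding-window alternative to fixed batches: keeps `limit` tasks in flight,
// so one slow item does not stall the rest of its batch
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function run(): Promise<void> {
    while (next < items.length) {
      const index = next++; // safe: no await between the read and the increment
      results[index] = await worker(items[index]);
    }
  }
  // Start up to `limit` workers that pull from the shared queue
  const workers = Array.from({ length: Math.min(limit, items.length) }, run);
  await Promise.all(workers);
  return results;
}
```

Results land at their original index, so output order matches input order even though completion order does not.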
Caching Async Results
// Caching async results
class AsyncCache<T> {
private cache = new Map<string, Promise<T>>();
async get(key: string, factory: () => Promise<T>): Promise<T> {
if (this.cache.has(key)) {
return this.cache.get(key)!;
}
const promise = factory();
this.cache.set(key, promise);
try {
const result = await promise;
return result;
} catch (error) {
this.cache.delete(key);
throw error;
}
}
clear(): void {
this.cache.clear();
}
}
// Usage
const cache = new AsyncCache<string>();
async function fetchData(id: number): Promise<string> {
return cache.get(`data-${id}`, async () => {
// Simulate API call
await new Promise(resolve => setTimeout(resolve, 1000));
return `Data for ${id}`;
});
}
// First call fetches data
const data1 = await fetchData(1);
// Second call uses cache
const data2 = await fetchData(1);
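One caveat: `AsyncCache` entries never expire, so stale data lives forever. A variant with a time-to-live re-fetches once an entry goes stale; this is a sketch, and the class name is my own:

```typescript
// AsyncCache variant with a time-to-live: stale entries are re-fetched
class AsyncTtlCache<T> {
  private cache = new Map<string, { promise: Promise<T>; expires: number }>();
  constructor(private ttlMs: number) {}
  async get(key: string, factory: () => Promise<T>): Promise<T> {
    const entry = this.cache.get(key);
    if (entry && entry.expires > Date.now()) {
      return entry.promise; // still fresh: reuse the in-flight or settled promise
    }
    const promise = factory();
    this.cache.set(key, { promise, expires: Date.now() + this.ttlMs });
    try {
      return await promise;
    } catch (error) {
      this.cache.delete(key); // don't cache failures
      throw error;
    }
  }
}
```

Caching the promise (not the value) also deduplicates concurrent requests for the same key, exactly as in the non-expiring version above.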
Practical Examples
1. Performance Monitoring
Performance Metrics
// Performance monitoring utilities
class PerformanceMonitor {
private metrics = new Map<string, number[]>();
startTiming(label: string): () => void {
const start = performance.now();
return () => {
const end = performance.now();
const duration = end - start;
if (!this.metrics.has(label)) {
this.metrics.set(label, []);
}
this.metrics.get(label)!.push(duration);
};
}
getMetrics(label: string): { average: number; min: number; max: number; count: number } {
const times = this.metrics.get(label) || [];
if (times.length === 0) {
return { average: 0, min: 0, max: 0, count: 0 };
}
const sum = times.reduce((a, b) => a + b, 0);
const average = sum / times.length;
const min = Math.min(...times);
const max = Math.max(...times);
return { average, min, max, count: times.length };
}
getAllMetrics(): Record<string, { average: number; min: number; max: number; count: number }> {
const result: Record<string, { average: number; min: number; max: number; count: number }> = {};
this.metrics.forEach((_, label) => {
result[label] = this.getMetrics(label);
});
return result;
}
}
// Usage
const monitor = new PerformanceMonitor();
// Monitor function performance
function expensiveFunction(n: number): number {
const endTiming = monitor.startTiming('expensiveFunction');
let result = 0;
for (let i = 0; i < n; i++) {
result += Math.sqrt(i);
}
endTiming();
return result;
}
// Run function multiple times
for (let i = 0; i < 10; i++) {
expensiveFunction(1000);
}
// Get performance metrics
console.log('Performance metrics:', monitor.getAllMetrics());
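Averages hide tail latency: a handful of slow runs can matter far more to users than the mean suggests. Percentiles describe the tail directly. A small helper using the nearest-rank method (my own addition, not part of `PerformanceMonitor`):

```typescript
// Nearest-rank percentile: the value below which p% of samples fall
// (assumes samples is non-empty)
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b); // copy so caller's array is untouched
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Usage: percentile(times, 95) gives the duration 95% of runs stayed under
```

Reporting p50/p95/p99 alongside the average from `getMetrics` gives a much more honest picture of real-world latency.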
2. Optimized Data Processing
Efficient Data Transformations
// Optimized data processing pipeline
class DataProcessor<T> {
private transformers: Array<(data: T[]) => T[]> = [];
addTransformer(transformer: (data: T[]) => T[]): this {
this.transformers.push(transformer);
return this;
}
process(data: T[]): T[] {
return this.transformers.reduce((acc, transformer) => transformer(acc), data);
}
}
// Usage
const processor = new DataProcessor<{ id: number; name: string; active: boolean }>();
processor
.addTransformer(data => data.filter(item => item.active))
.addTransformer(data => data.sort((a, b) => a.name.localeCompare(b.name)))
.addTransformer(data => data.slice(0, 10));
const data = [
{ id: 1, name: 'John', active: true },
{ id: 2, name: 'Jane', active: false },
{ id: 3, name: 'Bob', active: true },
{ id: 4, name: 'Alice', active: true }
];
const result = processor.process(data);
console.log('Processed data:', result);
3. Bundle Analysis
Bundle Size Analysis
// Bundle size analysis utilities
interface BundleAnalysis {
totalSize: number;
chunks: Array<{
name: string;
size: number;
modules: Array<{
name: string;
size: number;
percentage: number;
}>;
}>;
}
class BundleAnalyzer {
analyze(bundleStats: any): BundleAnalysis {
const chunks = bundleStats.chunks.map((chunk: any) => ({
name: chunk.names[0] || 'unnamed',
size: chunk.size,
modules: chunk.modules.map((module: any) => ({
name: module.name,
size: module.size,
percentage: (module.size / chunk.size) * 100
}))
}));
const totalSize = chunks.reduce((sum, chunk) => sum + chunk.size, 0);
return {
totalSize,
chunks
};
}
generateReport(analysis: BundleAnalysis): string {
let report = `Bundle Analysis Report\n`;
report += `Total Size: ${(analysis.totalSize / 1024).toFixed(2)} KB\n\n`;
analysis.chunks.forEach(chunk => {
report += `Chunk: ${chunk.name}\n`;
report += `Size: ${(chunk.size / 1024).toFixed(2)} KB\n`;
report += `Modules:\n`;
chunk.modules.forEach(module => {
report += ` ${module.name}: ${(module.size / 1024).toFixed(2)} KB (${module.percentage.toFixed(1)}%)\n`;
});
report += `\n`;
});
return report;
}
}
// Usage
const analyzer = new BundleAnalyzer();
// const analysis = analyzer.analyze(bundleStats);
// console.log(analyzer.generateReport(analysis));
Best Practices
1. Use Appropriate Data Structures
// Good: Use Map for key-value lookups
const userCache = new Map<number, User>();
// Good: Use Set for unique values
const uniqueIds = new Set<number>();
// Good: Use Array for ordered collections
const orderedItems: Item[] = [];
2. Implement Proper Caching
// Good: Implement caching for expensive operations
const cache = new Map<string, any>();
function expensiveOperation(key: string): any {
if (cache.has(key)) {
return cache.get(key);
}
const result = computeExpensiveResult(key);
cache.set(key, result);
return result;
}
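An unbounded `Map` cache is itself a memory risk. A bounded LRU (least recently used) cache evicts old entries automatically; this sketch relies on `Map` preserving insertion order, and the class name is my own:

```typescript
// Bounded cache: Map preserves insertion order, so the first key is the
// least recently used as long as we re-insert keys on every access
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private maxSize: number) {}
  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // move key to the most-recent position
    this.map.set(key, value);
    return value;
  }
  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry (first key in iteration order)
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}
```

Both operations stay O(1); the delete-then-set trick is what turns insertion order into recency order.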
3. Use Lazy Loading
// Good: Lazy load components
const LazyComponent = React.lazy(() => import('./LazyComponent'));
// Good: Lazy load modules
async function loadModule(moduleName: string) {
const module = await import(`./modules/${moduleName}`);
return module;
}
4. Optimize Bundle Size
// Good: Use tree-shakable imports
import { debounce } from 'lodash-es';
// Good: Use dynamic imports for code splitting
const feature = await import('./feature');
Common Pitfalls and Solutions
1. Memory Leaks
// ❌ Problem: Memory leak
class BadComponent {
private interval?: ReturnType<typeof setInterval>;
start() {
this.interval = setInterval(() => {
// Do something
}, 1000);
}
// Missing cleanup
}
// ✅ Solution: Proper cleanup
class GoodComponent {
private interval?: ReturnType<typeof setInterval>;
start() {
this.interval = setInterval(() => {
// Do something
}, 1000);
}
stop() {
if (this.interval) {
clearInterval(this.interval);
}
}
}
2. Inefficient Loops
// ❌ Problem: Inefficient nested loops
function findDuplicates(arr1: number[], arr2: number[]): number[] {
const duplicates: number[] = [];
for (let i = 0; i < arr1.length; i++) {
for (let j = 0; j < arr2.length; j++) {
if (arr1[i] === arr2[j]) {
duplicates.push(arr1[i]);
}
}
}
return duplicates;
}
// ✅ Solution: Use a Set for O(1) lookup — O(n + m) instead of O(n × m)
function findDuplicates(arr1: number[], arr2: number[]): number[] {
const set2 = new Set(arr2);
return arr1.filter(item => set2.has(item));
}
3. Unnecessary Re-renders
// ❌ Problem: Unnecessary re-renders
function BadComponent({ data }: { data: any[] }) {
const processedData = data.map(item => ({
...item,
processed: true
}));
return <div>{/* Render processedData */}</div>;
}
// ✅ Solution: Memoize expensive calculations
function GoodComponent({ data }: { data: any[] }) {
const processedData = useMemo(() =>
data.map(item => ({
...item,
processed: true
})), [data]
);
return <div>{/* Render processedData */}</div>;
}
Conclusion
Performance optimization in TypeScript is crucial for creating fast, efficient applications. By understanding:
- What performance optimization involves and its key concepts
- Why it's important for user experience and resource efficiency
- How to optimize bundle size, runtime performance, and memory usage
You can create applications that load quickly, run efficiently, and provide an excellent user experience. TypeScript's type system lets you refactor aggressively with confidence, while the optimization techniques above keep runtime performance strong.
Next Steps
- Practice implementing performance optimizations
- Explore advanced optimization techniques
- Learn about performance monitoring tools
- Move on to Chapter 14: Real-World Projects
This tutorial is part of the TypeScript Mastery series by syscook.dev