Technical Interviews
Revision as of 21:20, 19 December 2019
Resources for Technical Interviews
Algorithms
Computer Systems
Web Architecture
Key Principles
- Availability
- Performance
- Reliability
- Scalability
- Manageability
- Cost
Services
- Separate each piece of functionality into its own service so each can be scaled independently
- (e.g. reading and writing can be two services that access the same file store)
- Each service can then be managed separately.
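A minimal sketch of the read/write split described above (all names here are hypothetical): two services share one backing store, standing in for the shared file store, so each side can be scaled and deployed on its own.

```python
# Hypothetical sketch: split reads and writes into separate services
# that access the same backing store (a dict stands in for the file store).
class WriteService:
    def __init__(self, store: dict):
        self._store = store

    def put(self, key, value):
        self._store[key] = value

class ReadService:
    def __init__(self, store: dict):
        self._store = store

    def get(self, key):
        return self._store.get(key)

store = {}
WriteService(store).put("profile:1", "alice")
print(ReadService(store).get("profile:1"))
```

Because the two classes share nothing but the store, a read-heavy workload could run many ReadService instances behind a load balancer while keeping a single WriteService.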
Redundancy
- Replicate nodes for each service using a "shared-nothing architecture."
- Each node should be able to operate independently.
Partitions
- Also known as shards
- Scaling vertically: add more hard drives, memory
- Scaling horizontally: add more nodes
- Distribute data by some criterion: geographically, by type of user, etc.
- Make sure you can identify the server from a data item's ID (e.g. by hashing the ID or maintaining an index)
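The hashing approach above can be sketched as follows (a simplified illustration, not a production scheme; `NUM_SHARDS` and `shard_for` are names invented here):

```python
import hashlib

NUM_SHARDS = 4  # assumed fixed number of partitions for this sketch

def shard_for(record_id: str) -> int:
    """Deterministically map a record's ID to a shard number."""
    digest = hashlib.sha256(record_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same ID always resolves to the same shard, so any node
# can locate the server holding a record without an index.
print(shard_for("user:42") == shard_for("user:42"))
```

Note that plain modulo hashing reshuffles most keys when NUM_SHARDS changes; consistent hashing is the usual remedy for that.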
Caches
- Place a cache on the request layer.
- Cache in memory, not disk (e.g. memcached, redis)
- Global Caches
- If you have several request nodes, you can put a global cache between the request node and the database
- Distributed Cache
- Each request node holds a cache for a specific type of data
- A request node may forward requests to other request nodes which have the corresponding cache
- Cons: May be hard to remedy if a node goes down
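A minimal read-through cache at the request layer might look like this (a sketch only: the dict stands in for an in-memory store such as memcached or Redis, and `fetch_from_db` is a placeholder for a real database read):

```python
cache = {}  # stand-in for an in-memory cache like memcached or Redis

def fetch_from_db(key):
    """Placeholder for a slow database or file-store read."""
    return f"value-for-{key}"

def get(key):
    if key not in cache:              # cache miss: go to the backing store
        cache[key] = fetch_from_db(key)
    return cache[key]                 # cache hit: served from memory
```

Real deployments also need an eviction policy (e.g. LRU) and invalidation on writes, which this sketch omits.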
Proxies
- Used to filter, log, and modify requests
- Collapsed Forwarding: collapse the same or similar requests and return the same result to clients
- You can also collapse requests for data that is stored close together in the database, so they can be served by a single read
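Collapsed forwarding can be sketched with a simple synchronous batch handler (assumed names; real proxies such as Varnish do this for concurrent in-flight requests rather than pre-collected batches):

```python
def handle_batch(keys, backend_fetch):
    """Forward each distinct key to the backend once; duplicates share the result."""
    results = {}
    for key in keys:
        if key not in results:        # first occurrence: forward to backend
            results[key] = backend_fetch(key)
    return [results[k] for k in keys]  # every client gets the shared result
```

With three clients asking for ["a", "b", "a"], the backend is hit only twice while all three clients still receive a response.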
Indexes
Load Balancers
- Purpose: Handle many connections and route them to request nodes
- Algorithms: Random, round robin, custom criteria
- Software: HAProxy (https://www.haproxy.org/)
- Usually placed at the front of a distributed system
- Load balancers can send requests to other load balancers
- Challenges: Managing user session data
- Solutions: Caches, cookies, user data, URL rewriting
- Cons: Makes problem diagnosis difficult
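The round-robin algorithm mentioned above can be sketched in a few lines (class and method names are invented for illustration):

```python
import itertools

class RoundRobinBalancer:
    """Cycle through backend nodes in order, one per request."""

    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def pick(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
# Requests are spread evenly: node-a, node-b, node-c, node-a, ...
print(lb.pick(), lb.pick(), lb.pick(), lb.pick())
```

Round robin ignores node load; weighted or least-connections strategies (the "custom criteria" above) account for it at the cost of extra bookkeeping.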