
🚦 Traffic Light Control System Using Reinforcement Learning


Description

This project implements a simulation of an intersection with four traffic lanes (north, east, south, and west) using Reinforcement Learning (RL) to optimize traffic light management. The goal is to minimize vehicle wait times and improve traffic flow efficiency by dynamically adjusting the green light durations based on real-time traffic conditions.

Key Features

Reinforcement Learning-Based Control: The system uses an RL agent to decide which lane should get the green light based on the current traffic conditions.
Dynamic Traffic Simulation: Random vehicle generation simulates real-world traffic flow, with distinct vehicle densities for peak and off-peak hours.
Customizable Environment: The intersection can handle different traffic loads through parameters such as green light duration, vehicle cross time, and maximum allowed waiting time per vehicle (grouped in the configuration sketch after this list).
Four-Lane Intersection: Traffic is managed for four lanes—north, east, south, and west—each of which can have its own independent vehicle count and traffic light state.
Flexible Reward System: The reward mechanism incentivizes efficient traffic flow by providing positive rewards for clearing traffic within the allowed time and penalizing delays.
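
The exact parameter names live in the repository's source; as a rough illustration, the tunables described above could be grouped like this (the names and the max-wait default are assumptions for the sketch, not the project's actual API):

```python
from dataclasses import dataclass

@dataclass
class IntersectionConfig:
    """Illustrative grouping of the environment's tunable parameters."""
    green_light_duration: int = 10   # seconds a lane keeps the green light (default from the README)
    vehicle_cross_time: int = 3      # seconds for one vehicle to clear the intersection (default from the README)
    max_wait_time: int = 60          # max allowed waiting time before penalties apply (assumed value)
    max_steps: int = 100             # step limit per episode (default from the README)
```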

Environment Details

Action Space: A discrete space with four actions, one per lane; the agent chooses which lane receives the green light (see the environment sketch after this list).
Observation Space: The current state is represented by the number of vehicles waiting in each lane.
Vehicle Cross Time: The time it takes for each vehicle to pass through the intersection (configurable, default is 3 seconds per vehicle).
Green Light Duration: The default green light duration is 10 seconds, but it adapts dynamically based on the traffic load.
Peak/Off-Peak Simulation: Randomized vehicle generation simulates varying traffic density based on time of day, accounting for morning and evening rush hours.
Max Steps Per Episode: Each episode has a limit of 100 steps, with penalties applied if this limit is reached without efficient traffic management.
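
For readers unfamiliar with how these pieces fit together, here is a minimal Gymnasium-style environment with the same action/observation spaces and defaults (3 s cross time, 10 s green light, 100-step limit). It is a sketch of the idea, not the repository's implementation; the class name, arrival model, and reward shaping are simplified assumptions:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class TrafficIntersectionEnv(gym.Env):
    """Illustrative four-lane intersection environment (not the repo's actual code)."""

    LANES = ["north", "east", "south", "west"]

    def __init__(self, green_light_duration=10, vehicle_cross_time=3, max_steps=100):
        super().__init__()
        self.green_light_duration = green_light_duration
        self.vehicle_cross_time = vehicle_cross_time
        self.max_steps = max_steps
        # Action: index of the lane that receives the green light.
        self.action_space = spaces.Discrete(len(self.LANES))
        # Observation: number of vehicles waiting in each lane.
        self.observation_space = spaces.Box(low=0, high=np.inf, shape=(4,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.steps = 0
        # Random initial queues; real densities would depend on peak/off-peak hours.
        self.queues = self.np_random.integers(0, 10, size=4).astype(np.float32)
        return self.queues.copy(), {}

    def step(self, action):
        self.steps += 1
        # Vehicles that can cross during one green phase.
        cleared = min(self.queues[action], self.green_light_duration // self.vehicle_cross_time)
        self.queues[action] -= cleared
        # Assumed reward shaping: reward cleared vehicles, penalize vehicles still waiting.
        reward = float(cleared) - 0.1 * float(self.queues.sum())
        terminated = bool(self.queues.sum() == 0)       # all lanes cleared
        truncated = self.steps >= self.max_steps        # step limit reached
        return self.queues.copy(), reward, terminated, truncated, {}
```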

Simulation Workflow

Vehicle Generation: A random number of vehicles is generated for each lane, with densities that depend on the time of day (peak or off-peak hours).
Traffic Light Management: The RL agent observes the traffic situation and chooses which lane receives the green light to optimize vehicle flow.
Reward Calculation: Positive rewards are given for clearing vehicles efficiently, while negative rewards are applied for traffic delays or inefficiencies.
Step and Episode Tracking: Each episode runs until the maximum number of steps is reached or all lanes are cleared of vehicles (see the rollout sketch below).
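
Putting the workflow together, a single episode could be rolled out as below, continuing from the illustrative TrafficIntersectionEnv sketch above. A greedy longest-queue policy stands in for the trained RL agent:

```python
env = TrafficIntersectionEnv()
obs, info = env.reset(seed=0)
total_reward = 0.0

while True:
    action = int(obs.argmax())                # give the green light to the busiest lane
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:               # all lanes cleared, or step limit reached
        break

print(f"Episode finished after {env.steps} steps, total reward: {total_reward:.1f}")
```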
