From f32c39bd6a8b130195a9d60e437b0d4eabb0301a Mon Sep 17 00:00:00 2001 From: ll7 Date: Fri, 4 Oct 2024 10:29:01 +0200 Subject: [PATCH 01/28] Refactor student_roles24.md Fixes #304 Co-authored-by: JulianTrommer --- doc/08_dev_talks/paf24/student_roles24.md | 20 ++++---------------- 1 file changed, 4 insertions(+), 16 deletions(-) diff --git a/doc/08_dev_talks/paf24/student_roles24.md b/doc/08_dev_talks/paf24/student_roles24.md index 552005c2..6e6ba116 100644 --- a/doc/08_dev_talks/paf24/student_roles24.md +++ b/doc/08_dev_talks/paf24/student_roles24.md @@ -2,13 +2,14 @@ ## Role overview -2-3 Students per Role +2-4 Students per Role - **Systems Engineer** - Oversee the entire development process, ensuring smooth interaction between different subsystems (perception, planning, control, decision-making, etc.). - Define system-level architecture, ensuring each module (e.g., sensors, planning, control) interacts through well-defined interfaces. - Manage requirements (e.g. in issues) and ensure each team's outputs align with the overall system goals, including performance, reliability, and safety standards. - Serve as the point of contact for inter-team communication, ensuring alignment between roles such as Perception Engineers, Control Engineers, and Decision-Making Engineers. + - Take responsibility for identifying and managing dependencies between subsystems and methods, especially in relation to the timeline. Ensure that the sequence of development is logical and efficient, avoiding resource investment in features that rely on unfinished or unavailable modules. For example, avoid focusing efforts on decision-making algorithms that depend on perception data (e.g., stop lines) if it’s clear that the sensors or detection mechanisms won't be ready until later stages of the project. - Develop and enforce a systems integration strategy that covers continuous testing, validation, and verification of the autonomous driving stack. 
- Ensure proper data flow between modules using middleware (e.g., ROS). - Define and monitor key performance indicators (KPIs) for each subsystem, ensuring they collectively meet reliability, stability, and safety goals. @@ -25,14 +26,6 @@ - Collaborate with perception, planning, and control engineers to ensure the decision-making module aligns with the data and actions generated by other subsystems. - Simulate and validate decision-making in various complex driving scenarios within CARLA, such as navigating congested traffic or adverse weather conditions. - Ensure decision-making algorithms are interpretable and explainable to enhance debugging and safety validation. -- **Machine Learning Engineer** - - Implement machine learning techniques (e.g., deep learning, reinforcement learning) to improve various subsystems in the autonomous driving stack. - - Train neural networks for perception tasks (e.g., image segmentation, object detection, classification) using both simulated and real-world datasets. - - Develop and optimize behavior cloning, imitation learning, or other algorithms to enable the vehicle to learn from human driving examples. - - Integrate machine learning models into the perception or decision-making pipeline, ensuring smooth interaction with other system components. - - Collaborate with Perception Engineers to fine-tune sensor fusion models using AI techniques for improved environmental understanding. - - Analyze model performance and iteratively improve accuracy, efficiency, and real-time processing capability. - - Monitor and manage the data pipeline for model training, ensuring data quality, labeling accuracy, and sufficient coverage of edge cases. - **Perception Engineer** - Develop and improve sensor models (e.g., camera, LiDAR, radar) within the simulation, ensuring realistic sensor behavior and noise characteristics. 
- Implement state-of-the-art object detection, tracking, and sensor fusion algorithms to accurately interpret environmental data. @@ -57,7 +50,7 @@ - Ensure path planning algorithms balance safety, efficiency, and passenger comfort while maintaining vehicle controllability. - **Control Systems Engineer** - Work on the low-level control of the vehicle, including steering, throttle, braking, and handling. - - Implement advanced control algorithms (e.g., PID, MPC) to ensure the vehicle follows planned paths with stability and precision. + - Implement advanced control algorithms (e.g. MPC) to ensure the vehicle follows planned paths with stability and precision. - Tune control parameters to ensure smooth and reliable vehicle behavior under dynamic environmental conditions. - Collaborate with Path Planning Engineers to translate high-level paths into precise control actions. - Ensure the control system reacts dynamically to changes in the environment (e.g., obstacles, traffic conditions). @@ -82,7 +75,6 @@ graph TD SE[Systems Engineer] --> DME[Decision-Making Engineer] SE --> PE[Perception Engineer] - SE --> MLE[Machine Learning Engineer] SE --> LME[Localization and Mapping Engineer] SE --> PPE[Path Planning Engineer] SE --> CSE[Control Systems Engineer] @@ -92,9 +84,7 @@ graph TD DME --> PE DME --> PPE DME --> CSE - DME --> MLE - - PE <--> MLE + PE --> LME PE --> PPE @@ -108,14 +98,12 @@ graph TD IE --> SE IE --> TVE - IE --> MLE LME --> DME subgraph Module Teams DME PE - MLE LME PPE CSE From 197c02ebea73af784ee70b64756e10e8980235d7 Mon Sep 17 00:00:00 2001 From: ll7 Date: Fri, 4 Oct 2024 10:34:07 +0200 Subject: [PATCH 02/28] Add git-mob extension to .vscode/extensions.json Fixes add vscode co-author extension to vscode #306 --- .vscode/extensions.json | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/.vscode/extensions.json b/.vscode/extensions.json index 394eaac6..7285c05b 100644 --- a/.vscode/extensions.json +++ b/.vscode/extensions.json @@ -8,6 
+8,7 @@ "njpwerner.autodocstring", "ms-azuretools.vscode-docker", "ms-python.flake8", - "bierner.markdown-mermaid" + "bierner.markdown-mermaid", + "richardkotze.git-mob" ] } \ No newline at end of file From 1db1d6e4d4df5ad53b9abe3150cc5ecd074ff297 Mon Sep 17 00:00:00 2001 From: ll7 Date: Fri, 4 Oct 2024 11:48:39 +0200 Subject: [PATCH 03/28] Validation Engineer role abbreviation --- doc/08_dev_talks/paf24/student_roles24.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/08_dev_talks/paf24/student_roles24.md b/doc/08_dev_talks/paf24/student_roles24.md index 6e6ba116..666ac17a 100644 --- a/doc/08_dev_talks/paf24/student_roles24.md +++ b/doc/08_dev_talks/paf24/student_roles24.md @@ -78,7 +78,7 @@ graph TD SE --> LME[Localization and Mapping Engineer] SE --> PPE[Path Planning Engineer] SE --> CSE[Control Systems Engineer] - SE --> TVE[Testing and Validation Engineer] + SE --> TVE[Testing and Validation Eng.] SE --> IE[Infrastructure Engineer] DME --> PE From 4fae0639bcdd456d7a6204563cabffb7e2ad7547 Mon Sep 17 00:00:00 2001 From: ll7 Date: Fri, 4 Oct 2024 11:48:39 +0200 Subject: [PATCH 04/28] Refactor student_roles24.md --- doc/08_dev_talks/paf24/student_roles24.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/doc/08_dev_talks/paf24/student_roles24.md b/doc/08_dev_talks/paf24/student_roles24.md index 666ac17a..8c8f6e4c 100644 --- a/doc/08_dev_talks/paf24/student_roles24.md +++ b/doc/08_dev_talks/paf24/student_roles24.md @@ -9,7 +9,8 @@ - Define system-level architecture, ensuring each module (e.g., sensors, planning, control) interacts through well-defined interfaces. - Manage requirements (e.g. in issues) and ensure each team's outputs align with the overall system goals, including performance, reliability, and safety standards. - Serve as the point of contact for inter-team communication, ensuring alignment between roles such as Perception Engineers, Control Engineers, and Decision-Making Engineers. 
- - Take responsibility for identifying and managing dependencies between subsystems and methods, especially in relation to the timeline. Ensure that the sequence of development is logical and efficient, avoiding resource investment in features that rely on unfinished or unavailable modules. For example, avoid focusing efforts on decision-making algorithms that depend on perception data (e.g., stop lines) if it’s clear that the sensors or detection mechanisms won't be ready until later stages of the project. + - Take responsibility for identifying and managing dependencies between subsystems and methods, especially in relation to the timeline. Ensure that the sequence of development is logical and efficient, avoiding resource investment in features that rely on unfinished or unavailable modules. + - For example, avoid focusing efforts on decision-making algorithms that depend on perception data (e.g., stop lines) if it’s clear that the sensors or detection mechanisms won't be ready until later stages of the project. - Develop and enforce a systems integration strategy that covers continuous testing, validation, and verification of the autonomous driving stack. - Ensure proper data flow between modules using middleware (e.g., ROS). - Define and monitor key performance indicators (KPIs) for each subsystem, ensuring they collectively meet reliability, stability, and safety goals. 
From 3a17b0472bdf980ec669b152393aacdc23a6beda Mon Sep 17 00:00:00 2001 From: ll7 Date: Fri, 4 Oct 2024 12:16:45 +0200 Subject: [PATCH 05/28] remove redundant whitespace --- doc/08_dev_talks/paf24/student_roles24.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/08_dev_talks/paf24/student_roles24.md b/doc/08_dev_talks/paf24/student_roles24.md index 8c8f6e4c..2822486c 100644 --- a/doc/08_dev_talks/paf24/student_roles24.md +++ b/doc/08_dev_talks/paf24/student_roles24.md @@ -9,7 +9,7 @@ - Define system-level architecture, ensuring each module (e.g., sensors, planning, control) interacts through well-defined interfaces. - Manage requirements (e.g. in issues) and ensure each team's outputs align with the overall system goals, including performance, reliability, and safety standards. - Serve as the point of contact for inter-team communication, ensuring alignment between roles such as Perception Engineers, Control Engineers, and Decision-Making Engineers. - - Take responsibility for identifying and managing dependencies between subsystems and methods, especially in relation to the timeline. Ensure that the sequence of development is logical and efficient, avoiding resource investment in features that rely on unfinished or unavailable modules. + - Take responsibility for identifying and managing dependencies between subsystems and methods, especially in relation to the timeline. Ensure that the sequence of development is logical and efficient, avoiding resource investment in features that rely on unfinished or unavailable modules. - For example, avoid focusing efforts on decision-making algorithms that depend on perception data (e.g., stop lines) if it’s clear that the sensors or detection mechanisms won't be ready until later stages of the project. - Develop and enforce a systems integration strategy that covers continuous testing, validation, and verification of the autonomous driving stack. 
- Ensure proper data flow between modules using middleware (e.g., ROS). From b2876905541f5c97af2f8399f3d6304a8e90b8d0 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Mon, 7 Oct 2024 09:48:10 +0200 Subject: [PATCH 06/28] Changed route of dev to simple route --- .vscode/extensions.json | 3 ++- build/docker-compose_dev_distributed.yaml | 1 + 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/.vscode/extensions.json b/.vscode/extensions.json index 7285c05b..ed5f4b16 100644 --- a/.vscode/extensions.json +++ b/.vscode/extensions.json @@ -9,6 +9,7 @@ "ms-azuretools.vscode-docker", "ms-python.flake8", "bierner.markdown-mermaid", - "richardkotze.git-mob" + "richardkotze.git-mob", + "ms-vscode-remote.remote-containers" ] } \ No newline at end of file diff --git a/build/docker-compose_dev_distributed.yaml b/build/docker-compose_dev_distributed.yaml index fe805a93..e8f8b127 100644 --- a/build/docker-compose_dev_distributed.yaml +++ b/build/docker-compose_dev_distributed.yaml @@ -11,6 +11,7 @@ services: command: bash -c "sleep 10 && roslaunch agent/launch/dev.launch" environment: - CARLA_SIM_HOST= + - ROUTE=/workspace/code/routes/routes_simple.xml networks: carla: \ No newline at end of file From 254c7f5b1d57e847bb763e8579b45b8e64a902f6 Mon Sep 17 00:00:00 2001 From: ll7 Date: Mon, 7 Oct 2024 10:53:51 +0200 Subject: [PATCH 07/28] [Bug]: vulkan device Fixes #310 --- doc/00_assets/vulkan_device_not_available.png | Bin 0 -> 10524 bytes doc/01_general/02_installation.md | 39 ++++++++++++++++++ 2 files changed, 39 insertions(+) create mode 100644 doc/00_assets/vulkan_device_not_available.png diff --git a/doc/00_assets/vulkan_device_not_available.png b/doc/00_assets/vulkan_device_not_available.png new file mode 100644 index 0000000000000000000000000000000000000000..f0aa64d2cf6ff80c3adfcce522ba1ad639bb4c61 GIT binary patch literal 10524 zcmeHt_gj-ow>G^9qGCZ51Suj?4I;g$h(IV?l-{L8ij)Y38bnk?I-yq!MS2$@1W+~v zLX%wEWH1K^L8S#I+@g<0e@$Y->VO#RfYl(GeIw5>&s>_yF(}&k( 
zXwXpF18GGFHC)Ys}mtyh|a8=>63Pa^mD@@SV2PeHYF411BJaK3^cQVc1%CD%> zewD3H-x=&(rv4PzAD22;mUG|X#&3uIwJ1}5LG2@L-%@+H=o4wM-1KSfM1AqMJ3agA zm?VaOSxR1X8l{tcS~I~w+7@`_6?A__i!i@A5mt#5N`9Sj=J4>d$bj0~+O)K^!otGv z@bHh&I$2rS>gsB5Z}08xZHV5;D8gtztZ87WW%5NPuV`(+$#mT_LfZ>IL|Crk^4~!M z;ZiPSTHV+{AP|)m6@K$8c?AUpXtYn{KHM(r$=2v*3v^$tBjyBHMaX5Ao=wTfdCw9$rkpsccwZFskt>1gygxq6}Q_Z1yDrjYc=H`AR+! zyx>8~-{d2>Ha#zTsYVWVfMr0c96~m{3_&#Kd@|cBh;;+PfpKd!lK7FMb0w$aqFO(@ zOSjl8lQu?_kh`9$SJ?lEe*OA&Qc@D72>ZigMdRk|vA_7P?eMFAbQ!bI? zgA3&q6)$%4lcq&sooOD^2pzh4U3rJLMi!X1?{b^x(w}$Tr44qz-9*wHZ{`iK#)iv; zKb|Y7oRuK{sO)r0!Cvsu)Tzd9qpcL4++ji5MRInGGO39*9Cw$lE{@c>$Sph?H~i`< zChQFlX4#H|NcF9!{O1Rp-*n_(Jf0qY=3^ZDly&| zFPn0fQ#*nF1~Zs^Tz_CStXl@5ZHf zHi%O3KpFOp_4VtLl0esAqn~XcUf>DxL4~KpcWc=U{lvrFJ)z>5((bQ?%8^>|kqS>& zfBz=l#3j*-m zu{v`Xl~GZlmm9m>R={TNY1N)28LaZIwstQw-dy7nDvi)XVSc`olaudw zn3Rpx?b|O_wuy0^nGV zQt8MqJx5}vHkZc3$FE(#o}@`V>$<(OgIM@JDcu3AY+t?=#PaCJ$47mIW{&uLH;(B5 z`U~PKQ*wln5AWXnTqX4AFI>{|^CN|G%FWKrc?6NMh;_r~84v9SP>+n^kM z|Ned0cSQ^9b^*CL28tj69SA1C;NI?5Zf@@Ow)e{7bEl0Kfpk9BU>pvow6_#d8gFrv zvG^RLS+Nn?rX^Zwf2%VcGr87kvDfXE^718J+4afnUSZ&kgy;G-dI=o&3@+6CS&ze> z;=gtFB!qWCFb3S}1RxHUf;jA-zJ)w}oUh;qpjb{hY zP^LP6-N}_>Iu$WYSXeOo=wxJTYYT&Aw}`)C!MdeXSId$T#{o7ZuU)(EyO&p5+MN+g z%s{nF=pdI*V%^d_e=?e_h#{Alux=k+Mw~Bs1g$KhiO9W{BC$OpzQ6>GRM;4_9)h(<3T|ijZqUB=g9s^O1jOEDQDX4VK!5sDQ+h4Y#JCmgP`lujAcpocprcUlW94 zRZ7&vfBPmAZjH>3v|+I3xnoDJyP#ew6W{tU*Ogchty1|d<8M5XmUz&}YljuEn02jn zSitdK+P0wg#>U3Op5sSBD78ve#C#T)2ZCk~#_3gPLx$qUR z*C=g{2BxpsaUnO~R-_vFYz!fiy;gjbZ!)e3_4nqSOj7F^6E-eyUV-J%HhW&U$ZgJ+ zg}oRL_O)ak{Mzd34C48h1O^sra{LTwb`_PT`nnBQkm+1y9X~5We5JMM^pTGVmC&va zMcreDy{6=?C-KzG!U9PAf)5`a0)^&@A5!GKbW4j&tCq_m%t!9>Kb` zWj9a!*#6;+Zg)!o<>|)H$vbz8j4Q7A2K97xm4I!kF@!B$)&sX4V|?e%9Z;4A!DKd3 zQ*qjKIfU~-sjZNH{)gP$>hW;7iBro3{ysiv!azrwf)Ik5RU_1<*ch@R)(sMSRm^`2 z3M#Q*W3FCa+ZV9!fc+15{y}uRnm6u0hNr7F^Jig9%k08(v>?x~MvaS;d#RVV>sMeD zzDrYkvAl?#u?-1|{4&qQ2Og66%;8oSHd&6=L@00%_X-ucw)>yCCMijnk5E`o*qR>z z#Yl6XR|)#4HBKbmkHc}O+JlpmbJ=g5h}=Mf)KcMbn*~u2(3TFrXJ&@+*;oWQ>)jH~ 
zKPH!A1KHXfk4T;boj`xNJ!;QAF=w`{vu4O|W5}e^8Mv1;XR+^qLoT-nwBG=ohA@59 za&$|^R*34iZ{OP6+f6at;O5e`{HU5M-9T~U944?eN%bZ1Y z!`EQB)Uya8e0_dEEBU&S|AyTgC6Oc+f%+5^a=T_nBP%DzCT2b_-SNYvu0UdabyXge zpNfhKu>TYmI+M27rtU}xpjwV8Zd!tWfYusw#n#(Y_4p*co2O0BoI7la~J z=5doSxe03-D%n-L&YU|Hc%k&QCxG1Fvcjth3u5bkq=F3t#(@ z<^>S+cSqjL6|lIt7%38V+74Nxq4puKq;-;IdWIk&pp$lO-`b?vf#KHQgF*N8;Rv}g ze)7b42#RY3-R)L$eqcqJeAZJ?K-&CF-E7Rx$spp;fRo;H9KY z=U^(vZHn$Hq)gWC!u@V89z35Q<6OMcxo0vtt zcRuV=SEjFHpR^JVpLzmJPrqqs`6h5@=#VAcqi5bUxFw7KOXaF7mI&#(3AV$Mxg*P{ zo&-H^eU|+H98lr+pRxpnvm8Dul;x1+IIQNYa0*Us z_cU9&^A@^?qwt=}UD{pxu%5>`DP>b;R}VzZ~LDGvoShMOdPL zGy2G(ksQGMw+;XD$PB~RF_mefsfQ@7fkVP<{s1Pi(El@+lIFKbpPSYK+(~xAss4jn?!fTQUb3tKE z&0nH{RdO}xWL%&l!>9AEj^dXZK3yMGp{(o{mDcPKziQ8Bd4}clnZSH;$9NdEnT{Jv3WItPMB3ZLmM_s0qk%c?#*SvVGv05kR}Hbu1JWqY4m}asn){p0F|;so zJsbwYdc1UWw0IPa^f1oLhj!}8szxXT(ksc(m@wBiQ`$D~owU96;ZnjIVEhy z^SZ}oj(tc#B|OIY)@OKSH*(sy(@<lzkha=JU_qg(>8tJ7(Gs4{8C;q9r-eG%QAQ8k=@5^+FNHdCZ5KmvaQAkp>9jJIEfF( z=DncHo{GGkKlydO2a?0(;64Q5k{7f}(w&>3a1`!rlQ7pwQZo|eC$Mvht)vA$Rhx_R z;oZ3E5eiGyKU(^NIiCXaqNawg$v9JNtzUzu)t3A7VnA?|*GA@GbU;RIK-FowKPBpf z4h_dvn!q=!Y4>}#38EOD$_b8h<>!!>qlM1Vu z&Zx*ii?^^B7kRW+U)$MUD4bsFH%r(ll(aUhmT1;=&8V8&8^V}QcP%ZdC3Xcywt6J= zZ)?qZv9~G{7iI8KcKI+HghFJAMaF>I$Tw?@Y3E!CEC*uGv&N)1-D~w(hCf0$zQXPe zb^ckQGLM`U!^->=Z{AK3`S`hA$a*NV5}^vsS4k&ZaDkpLYI>5LC}aP~L9{_dBa}Hq z+)>rS_=iztTjoecTD)4PIwpjsS$VdX+UQ+_JT;&0NUR=2+=?YOylra*`Dn{)GjTkyTi}nr~wVsuazCo8ZNkErIof3vl#0B@|t!CkGZipBap47O` zz9|ypEibj)Mqtkou)W7VgI+8-eJLKoP^S`ZuP2&gGupw9Y|?mOrg1d-E9WSpRmFIT z5@jdmX9yy|{lloj$Z9bxpCT2j!5@!jMQ8{2QVW&s=w^Sj77CsgS+H)xU)jhMt(Sm# zLv3PV>Tm<%qSVJ+c##MF%zb^z?KsKnUW0Ye1SdXfKgT!cTZ|v`Y``}}FY?;HWnMK; zv*xsK(pLaEo{FWJV`rc9REV3Nkr8Yy2+)h=ixNjYdP9!O>R_{D$ihxQ-#P1PuXI@mVLdU%`#JuH$QjtcNChX z7t~OnU8)i*k$s=HWj?I@I8WvhkDWabMb zMa&bR;8oz|=c#Oh#~t5@E~Kd2ZE=9Xb#_a?X$vZXau9HoHrG4p!Q5 zRg!@F^R^BhSBr#h+^3Sk>@-wbN(i$NratK;x|R{Ef{V1qh0b3n9+ar7crgv{R39)tw#!rC zg;9y{NJweU{R8VW3{mHcWp29(f))E!B3rHrH{~UWg2zQB8!t`i;&Jsc4x$PI1!Xe{ 
z;+6THN;GxW#;xs{X))HujWjNNpJiivxgFQc2Gn%6Crcjiz5TP*oK=vTQ`ichap2~k z1ztkIwY}tGlwZ1bb1v?((N%a33UIELnUaWx_*2L#SjR6EUKM{+CWcw_>GPI)**B6d z3#`8O1(NY;DBai#GiqK*-nvivP^>xkz`ethedh-@Rk8OLtM3s&{!9wal|SPj7E5kR z4#*iGszhrMd9X2PeeF6ZCM3KB>|q0H^vzjms8ao7^1N#qPJAsSP*pgsn`E;yx>2Qr znn_SbhJ+YQzGWhUjeEd7RJaf2EqvPhQhD4Qw6jFg9(eMy`cb@DqK?ej`$5*T#c4rl zF*gggpGEL74&@0y>>M}B(+urkha&`sAG38@XNC4w5Tb3~o<9CZpm61bRo`C&4JxzN zv)?l;9WCLrpj6sKst3oL=$0|(pRH5FMt7Vx3;uW(u~h)Qk-&`iWU7?hceL$g-d z9<|{$E0~4{_clV0$`*vjcy*?1Z?dwB7e^b$FTVIpshgcLBsR=xuUE)6Yk4oaEg`^3 zfVLG5680X;vv{aBrbt*lagfGvDf*uF1pPZ+Td_{{syuqu#8S`TYt%wucmE7=;rwGt zfj+Hu2qoA@=aNgr89s?(7kQ6Z4eh0b;B!_pY|hSP#QtGX{a6FOp)A%bCazF~jlS`u z3$d2bHm6=@>g;8XI|J{tiOe3YG`$|JCKPKr%M5uU222cQb}!EGIL4=$9j$Jb%@Qfk zO7JrHk{>|hBO2clgEp~W%XG53#GlP>2q3^J%xK%BAw&uJ6&j=N$ZNz;-Kj=aHU}E5 z4z{q^$uyK*aEIm3dvtTZC;U6q$1~HKSUb;kr!iajZZ?;P zSX~6&ySOLip0%WB!!?NKp4*g6qU!Rl>CGV20;e{zR#ZmvP=8Ng{BOsgo;f$87ylec zm1Dr^=zl+5+?}nFVfw!{4~}>L=g}VSboxWMb$X&+50N;?IHK^qm(^k^Ip4A@>}m3f}7^z?vWj7MY9=WtcaB5Zxadx zZ2zl|Pu-=re%W=TVY%Km1h3B}_K?{RAK zSen-*4cNHw0Z5U66+lAT4k(t1>iS8RY&-o2_KEitt?UZZgSHQD#=iiHFoBqLah0N8wR0ebkS6vzVGqy7(fbEK7V!jb%Eh#b8 zoy3u(7{nsbY7rI_1T?YH|Cb_dd-fVYS>V3;6TnC;(W!KL?G8Vvsisr|U&ojSc4TUT zBuvF5)^C%g-2P{BfECg8Wj#{fvXE?#?*}g)`TYLD7E6@HAGd=lk zyd2PGib&Xd3raABTZ#5%R>M^^^A1pCYl~;P7u@Yu{-&Fh70LYI0@p4F2f)Cqn!jeA z+=`b3D+R=WVS>R72V89P-QWBiHy4?FT#b127lWntAGI@ljgA8=qS)Rv1n=@rJq3PI z9{!tu+qw*ltMvc%*p&>m<^P0#!UE56oRUTt49no-IpD+oA-I5#B3WRa;}H`LeN$oQ z;j5kYe@Q$Gcr~(t5?>7%mjK8VKvGS7BX;?`N?{-)x&Geoc>T>H|A(O?419qElYt2o zCF^JYUpK~;T*>L;-{Zhw&%Z7C{h{)Im_!8=!%K%=Giet3?WY5C$d6ayAo~vP=MCMu zK3w(sHtc3d1|M11gYOfR!N(HTjl7yc5>YI$8}3PcuQ0kYhnYhwnoHed&#nGS)!#GW zHX;tHz z071|`b*E_>hvM| zAw@<<(d(Rpk713WA(Z26LrbeGom`#|3FXltMFBsn(f+HuE1jgCK7lJ?Ze`i}e%6cm z_V|zt*Sjmnjv*@xeb)`LI?ns|bn_=8i2lfZOvTCrYA=z-tf43SkI{8kce)p;jUJwE zIJoWn3wzILk2J-B*JXeO<-cFxYBO9)*_;5c?cAGcf`U8J_R!PGmtf%SuN>NOsMyQp z*l$!l8t1uu7L`~OkR#u}$4#3LSLWC}+viRGg>YQQeD}6!e^_*~F{{@Jxj4z?Wab@K 
zC08|;&Z4w7Bw(?reap+qlpxu9LH^V95|OmiQ`wQ4w9J3in^4GAw@aF}Pd9)dk?o7| z7RZV9IlAT7uiFiGnxQ`Dara!}OD|CUgV}dNma=U2E@TzSFA7;0dJ`U{K((}MJXM_@ z80A0Qo-AVQdEjeXQa>X4AyE#Y@AM_q=*O`-zh7xNCrRmjD%0y+q`r`izOZ%aj!*)k zvX+G7?I?Mpns&aB^{?pl0 zef|q0_uVJ6_OsWc`XUzclHFwVoW2b5M_@cwcPcIBN{U=6mIkzRFn+(d&crM&AG#*0 zhgy0&JtG+59^96Aa1z!O0t1!=$G3&(u)A50^)5rTI>1XrAQm9G%0w~g5)%6IfkhlA z5)_C5@ApZ|BwLg`*Fc+CyPEWmG@q4hCG$$x@Ew&UFD)I??hs5$eky`}@5* z{v?;mjzo5$G>`WAMvMLR+P=LJ#~-<^LpyYBr{Y{lU1+g_{qOlt&mOv;ny9P?0BR)4 ph^>h|{)w74cW}WmLHF4dKhSJPsji;^Z#`0|-_}+sylMIDe*iZs1IhpZ literal 0 HcmV?d00001 diff --git a/doc/01_general/02_installation.md b/doc/01_general/02_installation.md index 669ff43c..d0d5914a 100644 --- a/doc/01_general/02_installation.md +++ b/doc/01_general/02_installation.md @@ -103,3 +103,42 @@ sudo systemctl restart docker ``` ([possible reason](https://stackoverflow.com/a/73256004)) + +### Vulkan device not available + +Cannot find a compatible Vulkan Device. +Try updating your video driver to a more recent version and make sure your video card supports Vulkan. + +![Vulkan device not available](../00_assets/vulkan_device_not_available.png) + +Verify the issue with the following command: + +```shell +$ docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi +Failed to initialize NVML: Unknown Error +``` + +> [!TIP] Solution found in https://stackoverflow.com/a/78137688 + +```shell +sudo vim /etc/nvidia-container-runtime/config.toml +``` + +, then changed `no-cgroups = false`, save + +Restart docker daemon: + +```shell +sudo systemctl restart docker +``` + +, then you can test by running + +```shell +docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi +``` + +Based on: + +1. https://bobcares.com/blog/docker-failed-to-initialize-nvml-unknown-error/ +2. 
https://bbs.archlinux.org/viewtopic.php?id=266915 From 7aba60011a276839797e54a61659699d2c82d815 Mon Sep 17 00:00:00 2001 From: ll7 Date: Mon, 7 Oct 2024 11:00:36 +0200 Subject: [PATCH 08/28] Fix broken links in installation documentation --- doc/01_general/02_installation.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/01_general/02_installation.md b/doc/01_general/02_installation.md index d0d5914a..bd42665b 100644 --- a/doc/01_general/02_installation.md +++ b/doc/01_general/02_installation.md @@ -118,7 +118,7 @@ $ docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi Failed to initialize NVML: Unknown Error ``` -> [!TIP] Solution found in https://stackoverflow.com/a/78137688 +> [!TIP] Solution found in ```shell sudo vim /etc/nvidia-container-runtime/config.toml @@ -140,5 +140,5 @@ docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi Based on: -1. https://bobcares.com/blog/docker-failed-to-initialize-nvml-unknown-error/ -2. https://bbs.archlinux.org/viewtopic.php?id=266915 +1. +2. 
From 17ed54a47c942822675df0eb4a6939f57c1480ae Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Mon, 7 Oct 2024 11:19:19 +0200 Subject: [PATCH 09/28] Updated all occurences of name paf --- README.md | 2 +- build/config-comlipy.yml | 2 +- build/docker-compose_cicd.yaml | 2 +- build/docker/agent/Dockerfile | 2 +- build/docker/agent/Dockerfile_Submission | 2 +- code/agent/src/agent/agent.py | 6 +++--- .../src/position_heading_filter_debug_node.py | 6 +++--- code/planning/src/local_planner/utils.py | 2 +- doc/01_general/03_commands.md | 2 +- doc/02_development/12_discord_webhook.md | 2 +- .../01_acting/04_paf21_2_and_pylot_acting.md | 4 ++-- .../02_perception/05-autoware-perception.md | 4 ++-- .../02_perception/06_paf_21_1_perception.md | 2 +- doc/03_research/02_perception/LIDAR_data.md | 12 ++++++------ .../object-detection-model_evaluation/globals.py | 2 +- pc_setup_user.sh | 4 ++-- 16 files changed, 28 insertions(+), 28 deletions(-) diff --git a/README.md b/README.md index 4998d4ac..b4495249 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# Praktikum Autonomes Fahren 2023 - PAF23 +# Praktikum Autonomes Fahren - PAF This repository contains the source code for the "Praktikum Autonomes Fahren" at the Chair of Mechatronics from the University of Augsburg in the winter semester of 2023/2024. The goal of the project is to develop a self-driving car that can navigate through a simulated environment. 
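The Vulkan/NVML troubleshooting section added in the installation patch above boils down to flipping a single key in `/etc/nvidia-container-runtime/config.toml` and restarting the Docker daemon. A minimal sketch of that one-line edit (the helper name is assumed, and it is demonstrated on an in-memory sample because touching the real file requires root):

```python
def enable_cgroups(config_text: str) -> str:
    # Mirror the manual fix: set `no-cgroups = false` in
    # nvidia-container-runtime's config.toml, leaving other keys alone.
    fixed = []
    for line in config_text.splitlines():
        if line.strip().startswith("no-cgroups"):
            line = "no-cgroups = false"
        fixed.append(line)
    return "\n".join(fixed)


# Demo on a sample instead of /etc/nvidia-container-runtime/config.toml.
sample = "disable-require = false\nno-cgroups = true\n"
print(enable_cgroups(sample))
```

After writing the real file, `sudo systemctl restart docker` is still needed, as the section above describes.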
diff --git a/build/config-comlipy.yml b/build/config-comlipy.yml index 9fffe84d..1eb36ffc 100644 --- a/build/config-comlipy.yml +++ b/build/config-comlipy.yml @@ -1,6 +1,6 @@ # comlipy config file (commit naming) global: - help: 'Help: https://github.com/ll7/paf22/blob/main/doc/developement/commit.md' + help: 'Help: https://github.com/una-auxme/paf/blob/main/doc/02_development/03_commit.md' rules: scope-min-length: diff --git a/build/docker-compose_cicd.yaml b/build/docker-compose_cicd.yaml index c792c6d0..7fd57177 100644 --- a/build/docker-compose_cicd.yaml +++ b/build/docker-compose_cicd.yaml @@ -6,7 +6,7 @@ include: services: agent: - image: ghcr.io/una-auxme/paf23:${AGENT_VERSION:-latest} + image: ghcr.io/una-auxme/paf:${AGENT_VERSION:-latest} init: true tty: true logging: diff --git a/build/docker/agent/Dockerfile b/build/docker/agent/Dockerfile index ea981056..107ece04 100644 --- a/build/docker/agent/Dockerfile +++ b/build/docker/agent/Dockerfile @@ -29,7 +29,7 @@ RUN apt-get install wget unzip # Download Carla PythonAPI (alternative to getting it from the Carla-Image, which is commented out above) # If the PythonAPI/Carla version changes, either update the link, or refer to the comment at the top of this file. 
-RUN wget https://github.com/una-auxme/paf23/releases/download/v0.0.1/PythonAPI_Leaderboard-2.0.zip -O PythonAPI.zip \ +RUN wget https://github.com/una-auxme/paf/releases/download/v0.0.1/PythonAPI_Leaderboard-2.0.zip -O PythonAPI.zip \ && unzip PythonAPI.zip \ && rm PythonAPI.zip \ && mkdir -p /opt/carla \ diff --git a/build/docker/agent/Dockerfile_Submission b/build/docker/agent/Dockerfile_Submission index bb6757d2..a329247e 100644 --- a/build/docker/agent/Dockerfile_Submission +++ b/build/docker/agent/Dockerfile_Submission @@ -26,7 +26,7 @@ RUN apt-get update \ # install dependencies for libgit2 and Carla PythonAPI RUN apt-get install wget unzip -RUN wget https://github.com/una-auxme/paf23/releases/download/v0.0.1/PythonAPI_Leaderboard-2.0.zip -O PythonAPI.zip \ +RUN wget https://github.com/una-auxme/paf/releases/download/v0.0.1/PythonAPI_Leaderboard-2.0.zip -O PythonAPI.zip \ && unzip PythonAPI.zip \ && rm PythonAPI.zip \ && mkdir -p /opt/carla \ diff --git a/code/agent/src/agent/agent.py b/code/agent/src/agent/agent.py index f3f7b4a0..5a97786b 100755 --- a/code/agent/src/agent/agent.py +++ b/code/agent/src/agent/agent.py @@ -4,10 +4,10 @@ def get_entry_point(): - return 'PAF22Agent' + return 'PAFAgent' -class PAF22Agent(ROS1Agent): +class PAFAgent(ROS1Agent): def setup(self, path_to_conf_file): self.track = Track.MAP @@ -88,4 +88,4 @@ def sensors(self): return sensors def destroy(self): - super(PAF22Agent, self).destroy() + super(PAFAgent, self).destroy() diff --git a/code/perception/src/position_heading_filter_debug_node.py b/code/perception/src/position_heading_filter_debug_node.py index 4f3e0caf..f43763e1 100755 --- a/code/perception/src/position_heading_filter_debug_node.py +++ b/code/perception/src/position_heading_filter_debug_node.py @@ -40,7 +40,7 @@ def __init__(self): self.control_loop_rate = self.get_param("control_loop_rate", "0.05") # carla attributes - CARLA_HOST = os.environ.get('CARLA_HOST', 'paf23-carla-simulator-1') + CARLA_HOST = 
os.environ.get('CARLA_HOST', 'paf-carla-simulator-1') CARLA_PORT = int(os.environ.get('CARLA_PORT', '2000')) self.client = carla.Client(CARLA_HOST, CARLA_PORT) self.world = None @@ -189,7 +189,7 @@ def save_position_data(self): """ This method saves the current location errors in a csv file. in the folders of - paf23/doc/06_perception/00_Experiments/kalman_datasets + paf/doc/06_perception/00_Experiments/kalman_datasets It does this for a limited amount of time. """ # stop saving data when max is reached @@ -222,7 +222,7 @@ def save_heading_data(self): """ This method saves the current heading errors in a csv file. in the folders of - paf23/doc/06_perception/00_Experiments/kalman_datasets + paf/doc/06_perception/00_Experiments/kalman_datasets It does this for a limited amount of time. """ # if rospy.get_time() > 45 stop saving data: diff --git a/code/planning/src/local_planner/utils.py b/code/planning/src/local_planner/utils.py index da84a7cd..63cf5600 100644 --- a/code/planning/src/local_planner/utils.py +++ b/code/planning/src/local_planner/utils.py @@ -152,7 +152,7 @@ def spawn_car(distance): Args: distance (float): distance """ - CARLA_HOST = os.environ.get('CARLA_HOST', 'paf23-carla-simulator-1') + CARLA_HOST = os.environ.get('CARLA_HOST', 'paf-carla-simulator-1') CARLA_PORT = int(os.environ.get('CARLA_PORT', '2000')) client = carla.Client(CARLA_HOST, CARLA_PORT) diff --git a/doc/01_general/03_commands.md b/doc/01_general/03_commands.md index 3a731218..e25324e3 100644 --- a/doc/01_general/03_commands.md +++ b/doc/01_general/03_commands.md @@ -1,6 +1,6 @@ # ⌨️ Available commands -A specific `b5` workflow for gpu installation in this project is specified in an issue comment: +A specific `b5` workflow for gpu installation in this project is specified in an issue comment: ## General commands diff --git a/doc/02_development/12_discord_webhook.md b/doc/02_development/12_discord_webhook.md index f9b427f5..b93de316 100644 --- a/doc/02_development/12_discord_webhook.md 
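The renamed `CARLA_HOST` default above follows an environment-variable-with-fallback pattern: the CARLA client connects to the compose service name unless the variable overrides it. A minimal sketch of that lookup (function name assumed; the real nodes pass the result to `carla.Client`):

```python
import os


def carla_connection_params():
    # Same pattern as the diff above: the host defaults to the
    # docker-compose service name when CARLA_HOST is unset.
    host = os.environ.get("CARLA_HOST", "paf-carla-simulator-1")
    port = int(os.environ.get("CARLA_PORT", "2000"))
    return host, port


print(carla_connection_params())
```

Setting `CARLA_HOST`/`CARLA_PORT` in the compose file (as patch 06 does with `ROUTE`) overrides the defaults without code changes.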
+++ b/doc/02_development/12_discord_webhook.md @@ -4,6 +4,6 @@ Author: Lennart Luttkus, 15.11.2023 The discord bot has access to the `#gitupdates` text channel on our discord server. It is an Integration as a Webhook. -Settings for this webhook can be found in the repository settings . +Settings for this webhook can be found in the repository settings . The Webhook post updates from the repository in the `#gitupdates` channel. Helpful tutorial can be found here: diff --git a/doc/03_research/01_acting/04_paf21_2_and_pylot_acting.md b/doc/03_research/01_acting/04_paf21_2_and_pylot_acting.md index b1e76d19..bb372dd6 100644 --- a/doc/03_research/01_acting/04_paf21_2_and_pylot_acting.md +++ b/doc/03_research/01_acting/04_paf21_2_and_pylot_acting.md @@ -15,7 +15,7 @@ ![Untitled](../../00_assets/research_assets/stanley_controller.png) -### [List of Inputs/Outputs](https://github.com/una-auxme/paf23/blob/main/doc/03_research/01_acting/02_acting_implementation.md#list-of-inputsoutputs) +### [List of Inputs/Outputs](https://github.com/una-auxme/paf/blob/main/doc/03_research/01_acting/02_acting_implementation.md#list-of-inputsoutputs) - Subscribes to: - [nav_msgs/Odometry Message](http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html) : to get the current position and heading @@ -25,7 +25,7 @@ - Publishes: - [CarlaEgoVehicleControl.msg](https://carla.readthedocs.io/projects/ros-bridge/en/latest/ros_msgs/#carlaegovehiclecontrolmsg) : to actually control the vehicles throttle, steering -### [Challenges](https://github.com/una-auxme/paf23/blob/main/doc/03_research/01_acting/02_acting_implementation.md#challenges) +### [Challenges](https://github.com/una-auxme/paf/blob/main/doc/03_research/01_acting/02_acting_implementation.md#challenges) A short list of challenges for the implementation of a basic acting domain and how they these could be tackled based on the requirements mentioned above. 
diff --git a/doc/03_research/02_perception/05-autoware-perception.md b/doc/03_research/02_perception/05-autoware-perception.md index fa9f8418..42fb6256 100644 --- a/doc/03_research/02_perception/05-autoware-perception.md +++ b/doc/03_research/02_perception/05-autoware-perception.md @@ -2,7 +2,7 @@ ## 1.Architecture -![image](https://github.com/una-auxme/paf23/assets/102369315/6b3fb964-e650-442a-a674-8e0471d931a9) +![image](https://github.com/una-auxme/paf/assets/102369315/6b3fb964-e650-442a-a674-8e0471d931a9) Focus on: @@ -16,7 +16,7 @@ Focus on: Autowares perception is very complex and uses a variety of mechnaism to gather as much information as possible about the surroundings of the car. -![image](https://github.com/una-auxme/paf23/assets/102369315/23f9699e-85c7-44c6-b9fa-a603dc73afcf) +![image](https://github.com/una-auxme/paf/assets/102369315/23f9699e-85c7-44c6-b9fa-a603dc73afcf) For the perception Autoware mainly uses the following Sensors: diff --git a/doc/03_research/02_perception/06_paf_21_1_perception.md b/doc/03_research/02_perception/06_paf_21_1_perception.md index 6d2d0903..4538d028 100644 --- a/doc/03_research/02_perception/06_paf_21_1_perception.md +++ b/doc/03_research/02_perception/06_paf_21_1_perception.md @@ -2,7 +2,7 @@ ## 1. Architecture -![image](https://github.com/una-auxme/paf23/assets/102369315/07328c78-83d7-425c-802e-8cc49430e6c1) +![image](https://github.com/una-auxme/paf/assets/102369315/07328c78-83d7-425c-802e-8cc49430e6c1) ### **Key Features** diff --git a/doc/03_research/02_perception/LIDAR_data.md b/doc/03_research/02_perception/LIDAR_data.md index 528620dc..b55cf7e4 100644 --- a/doc/03_research/02_perception/LIDAR_data.md +++ b/doc/03_research/02_perception/LIDAR_data.md @@ -9,22 +9,22 @@ LIDAR-Data comes in Pointclouds from a specific LIDAR-Topic. 
`rospy.Subscriber(rospy.get_param('~source_topic', "/carla/hero/LIDAR"), PointCloud2, self.callback)` -Read more about the LIDAR-Sensor [here](https://github.com/una-auxme/paf23/blob/main/doc/06_perception/03_lidar_distance_utility.md) +Read more about the LIDAR-Sensor [here](https://github.com/una-auxme/paf/blob/main/doc/06_perception/03_lidar_distance_utility.md) ## Processing The goal is to identify Objects and their distance. Therefore we need to calculate distances from the pointcloud data. To do this, the lidar-distance node first converts pointcloud data to an array, which contains cartesian coordinates. -`paf23-agent-1 | (76.12445 , -1.6572031e+01, 13.737187 , 0.7287409 )` +`paf-agent-1 | (76.12445 , -1.6572031e+01, 13.737187 , 0.7287409 )` -`paf23-agent-1 | (71.9434 , -1.8718828e+01, 13.107929 , 0.7393809 )` +`paf-agent-1 | (71.9434 , -1.8718828e+01, 13.107929 , 0.7393809 )` -`paf23-agent-1 | (-0.3482422 , -1.6367188e-02, -0.20128906, 0.99839103)` +`paf-agent-1 | (-0.3482422 , -1.6367188e-02, -0.20128906, 0.99839103)` -`paf23-agent-1 | (-0.3486328 , -1.4062500e-02, -0.20152344, 0.99838954)` +`paf-agent-1 | (-0.3486328 , -1.4062500e-02, -0.20152344, 0.99838954)` -`paf23-agent-1 | (-0.35070312, -2.3828126e-03, -0.2025 , 0.99838144)` +`paf-agent-1 | (-0.35070312, -2.3828126e-03, -0.2025 , 0.99838144)` The first three values of each row correspond to x, y, z. 
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/globals.py b/doc/06_perception/experiments/object-detection-model_evaluation/globals.py index df8a9d5c..5cb86c0a 100644 --- a/doc/06_perception/experiments/object-detection-model_evaluation/globals.py +++ b/doc/06_perception/experiments/object-detection-model_evaluation/globals.py @@ -1,4 +1,4 @@ -IMAGE_BASE_FOLDER = '/home/maxi/paf23/code/output/12-dev/rgb/center' +IMAGE_BASE_FOLDER = '/home/maxi/paf/code/output/12-dev/rgb/center' IMAGES_FOR_TEST = { 'start': '1600.png', diff --git a/pc_setup_user.sh b/pc_setup_user.sh index 7d752f35..7efd778d 100755 --- a/pc_setup_user.sh +++ b/pc_setup_user.sh @@ -1,7 +1,7 @@ cd mkdir git cd git -git clone https://github.com/una-auxme/paf23.git +git clone https://github.com/una-auxme/paf.git -cd paf23 +cd paf ./dc-run-file.sh build/docker-compose.yaml \ No newline at end of file From 1dabc13ab8df59870ca2b4d8751706c493babcda Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Tue, 8 Oct 2024 14:46:33 +0200 Subject: [PATCH 10/28] Removed b5 from project --- README.md | 16 +-- build/Taskfile | 76 ------------ build/config.yml | 4 - build/hooks/pre-commit.d/10-flake8.sh | 12 -- build/hooks/pre-commit.d/20-markdown.sh | 12 -- build/tasks/ros.sh | 114 ------------------ doc/01_general/02_installation.md | 46 +------ doc/01_general/03_commands.md | 102 ---------------- doc/01_general/Readme.md | 3 +- doc/02_development/02_linting.md | 2 - .../10_installing_python_packages.md | 12 -- .../14_distributed_simulation.md | 4 +- doc/02_development/installing_cuda.md | 6 - doc/06_perception/01_dataset_generator.md | 8 +- .../07_position_heading_filter_debug_node.md | 2 +- doc/07_planning/01_py_trees.md | 7 +- 16 files changed, 14 insertions(+), 412 deletions(-) delete mode 100644 build/Taskfile delete mode 100644 build/config.yml delete mode 100644 build/hooks/pre-commit.d/10-flake8.sh delete mode 100644 build/hooks/pre-commit.d/20-markdown.sh delete mode 100644 
build/tasks/ros.sh delete mode 100644 doc/01_general/03_commands.md diff --git a/README.md b/README.md index b4495249..48c48f93 100644 --- a/README.md +++ b/README.md @@ -23,26 +23,12 @@ As the project is still in early development, these requirements are subject to ## Installation -To run the project you have to install [b5](https://github.com/team23/b5) -and [docker](https://docs.docker.com/engine/install/) with NVIDIA GPU support, +To run the project you have to install [docker](https://docs.docker.com/engine/install/) with NVIDIA GPU support, [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). -`b5` is used to simplify some of the docker commands and to provide a more user-friendly interface. `docker` and `nvidia-docker` are used to run the project in a containerized environment with GPU support. -Afterwards, you can set up and execute the project with the following two commands: - -```bash -# Setup project -b5 install - -# Run project -b5 run -``` - More detailed instructions about setup and execution can be found [here](./doc/01_general/Readme.md). -More available b5 commands are documented [here](./doc/01_general/03_commands.md). - ## Development If you contribute to this project please read the guidelines first. They can be found [here](./doc/02_development/Readme.md). 
diff --git a/build/Taskfile b/build/Taskfile deleted file mode 100644 index c89e1133..00000000 --- a/build/Taskfile +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env bash -# b5 Taskfile, see https://git.team23.de/build/b5 for details - -########################################## -# General commands -########################################## - -task:shell() { - container="$1" - command="$2" - additionalArguments="${@:3}" - docker:container_run "${container:-agent}" "${command:-/bin/bash}" ${additionalArguments:-} -} - -########################################## -# Project setup / maintenance -########################################## -task:install() { - task:install:git_hooks - #task:gitconfig:copy - install:gpu-support - docker:install -} - -install:gpu-support() { - # check if docker-nvidia is installed, to make the project also executable on - # systems without nvidia GPU. - - if [ -z "$(command -v docker-nvidia)" ] - then - echo -e "Juhu! Alles ist richtig installiert für NVIDIA-Support! Hier ein Keks für dich :D" - else - RED='\033[0;31m' - NC='\033[0m' - echo -e "${RED}######################################################################################${NC}" - echo -e "${RED}WARNING: NVIDIA Container Toolkit not installed. The project won't run as expected!${NC}" - echo -e "${RED}#####################################################################################${NC}" - fi -} - -########################################## -# Project linting -########################################## - -task:lint() { - b5 python:lint - b5 markdown:lint -} - -task:python:lint() { - docker:container_run -T flake8 code -} - -task:markdown:lint() { - docker:container_run -T mdlint markdownlint . -} - -task:markdown:fix() { - docker:container_run -T mdlint markdownlint --fix . 
-} - -task:comlipy() { - docker:container_run -T comlipy -c /apps/build/config-comlipy.yml "$@" -} - -task:install:git_hooks() { - test -L ../.git/hooks/pre-commit || ln -s ../../build/hooks/pre-commit ../.git/hooks/ - test -L ../.git/hooks/commit-msg || ln -s ../../build/hooks/commit-msg ../.git/hooks/ - chmod +x ./hooks/* -} - -task:gitconfig:copy() { - cp -u ~/.gitconfig ../.gitconfig -} - -source ./tasks/ros.sh diff --git a/build/config.yml b/build/config.yml deleted file mode 100644 index 648c6c08..00000000 --- a/build/config.yml +++ /dev/null @@ -1,4 +0,0 @@ -# b5 config file -modules: - template: - docker: diff --git a/build/hooks/pre-commit.d/10-flake8.sh b/build/hooks/pre-commit.d/10-flake8.sh deleted file mode 100644 index 237a5180..00000000 --- a/build/hooks/pre-commit.d/10-flake8.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/sh - -python_changed=0 -FILE_PATTERN=\.py$ - -git diff --cached --name-only | grep -q $FILE_PATTERN && python_changed=1 - -if [ $python_changed = 1 ]; then - b5 python:lint -else - echo "No python files in commit, skip python linting" -fi \ No newline at end of file diff --git a/build/hooks/pre-commit.d/20-markdown.sh b/build/hooks/pre-commit.d/20-markdown.sh deleted file mode 100644 index 13a93af0..00000000 --- a/build/hooks/pre-commit.d/20-markdown.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/sh - -markdown_changed=0 -FILE_PATTERN=\.md$ - -git diff --cached --name-only | grep -q $FILE_PATTERN && markdown_changed=1 - -if [ $markdown_changed = 1 ]; then - b5 lint -else - echo "No markdown files in commit, skip markdown linting" -fi \ No newline at end of file diff --git a/build/tasks/ros.sh b/build/tasks/ros.sh deleted file mode 100644 index bea9d749..00000000 --- a/build/tasks/ros.sh +++ /dev/null @@ -1,114 +0,0 @@ -#!/usr/bin/env bash -# b5 Taskfile, see https://git.team23.de/build/b5 for details - -# shortcuts for commands documented here -# http://wiki.ros.org/ROS/CommandLineTools#Common_user_tools - -task:roscommand() { - # seems 
necessary to source the setup file on every call - docker:container_run agent /bin/bash -c "source /opt/ros/noetic/setup.bash && ${@}" -} - -task:rosbag() { - task:roscommand "rosbag ${@}" -} - -task:ros_readbagfile() { - task:roscommand "ros_readbagfile ${@}" -} - -task:rosbash() { - task:roscommand "rosbash ${@}" -} - -task:roscd() { - task:roscommand "roscd ${@}" -} - -task:rosclean() { - task:roscommand "rosclean ${@}" -} - -task:roscore() { - task:roscommand "roscore ${@}" -} - -task:rosdep() { - task:roscommand "rosdep ${@}" -} - -task:rosed() { - task:roscommand "rosed ${@}" -} - -task:roscreate-pkg() { - task:roscommand "roscreate-pkg ${@}" -} - -task:roscreate-stack() { - task:roscommand "roscreate-stack ${@}" -} - -task:rosrun() { - task:roscommand "rosrun ${@}" -} - -task:roslaunch() { - task:roscommand "roslaunch ${@}" -} - -task:roslocate() { - task:roscommand "roslocate ${@}" -} - -task:rosmake() { - task:roscommand "rosmake ${@}" -} - -task:rosmsg() { - task:roscommand "rosmsg ${@}" -} - -task:rosnode() { - additionalArguments="${@:1}" - task:roscommand "rosnode ${@}" -} - -task:rospack() { - task:roscommand "rospack ${@}" -} - -task:rosparam() { - task:roscommand "rosparam ${@}" -} - -task:rossrv() { - task:roscommand "rossrv ${@}" -} - -task:rosservice() { - task:roscommand "rosservice ${@}" -} - -task:rosstack() { - task:roscommand "rosstack ${@}" -} - -task:rostopic() { - task:roscommand "rostopic ${@}" -} - -task:rosversion() { - task:roscommand "rosversion ${@}" -} -task:rqt_graph() { - task:roscommand "rqt_graph ${@}" -} - -task:rqt_plot() { - task:roscommand "rqt_plot ${@}" -} - -task:rqt_topic() { - task:roscommand "rqt_topic ${@}" -} diff --git a/doc/01_general/02_installation.md b/doc/01_general/02_installation.md index bd42665b..a0a1d332 100644 --- a/doc/01_general/02_installation.md +++ b/doc/01_general/02_installation.md @@ -1,38 +1,12 @@ # 🛠️ Installation -To run the project you have to install [b5](https://github.com/team23/b5) and 
[docker](https://docs.docker.com/engine/install/) with NVIDIA GPU support, [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). +To run the project you have to install [docker](https://docs.docker.com/engine/install/) with NVIDIA GPU support, [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). -For development, we further recommend PyCharm Professional. More information about its installation can be found in [PyCharm Setup](../02_development/06_pycharm_setup.md) +For development, we recommend Visual Studio Code with the plugins that are recommended inside the `.vscode` folder. ## Installation -If not yet installed first install b5 and docker as described in section [b5 installation](#b5-installation) and [Docker with NVIDIA GPU support](#docker-with-nvidia-gpu-support). - -After that setting up the project and executing it is as easy as that: - -```shell -# Setup project -b5 install - -# Run project -b5 run -``` - -## b5 installation - -Make sure you have installed python and pip. If not yet installed, you can do by the following (Ubuntu): - -```shell -# Install python3 -sudo apt install python3 -``` - -Afterwards just install b5 by running: - -```shell -# Install b5 -pip install b5 -``` +If not yet installed first install docker as described in section [Docker with NVIDIA GPU support](#docker-with-nvidia-gpu-support). ## Docker with NVIDIA GPU support @@ -90,20 +64,6 @@ sudo systemctl restart docker ## 🚨 Common Problems -### `b5: command not found` - ---- - -1. If you already installed b5 (`pip install b5`) try to log out and log back in. - -2. If that doesn't help add this line to your `~/.bash_profile` or `~/.bashrc`: - - ```shell - export PATH=$PATH:$HOME/.local/bin - ``` - -([possible reason](https://stackoverflow.com/a/73256004)) - ### Vulkan device not available Cannot find a compatible Vulkan Device. 
diff --git a/doc/01_general/03_commands.md b/doc/01_general/03_commands.md deleted file mode 100644 index e25324e3..00000000 --- a/doc/01_general/03_commands.md +++ /dev/null @@ -1,102 +0,0 @@ -# ⌨️ Available commands - -A specific `b5` workflow for gpu installation in this project is specified in an issue comment: - -## General commands - -### `b5 run` - -Starts the Project (docker-compose up). - -### `b5 halt` - -Stops the Project (docker-compose down). - -### `b5 shell` - -Makes it possible to get a shell of any docker container contained in the project. - -Possible arguments: - -| `argument` | `description` | `optional` | `default` | -|-------------|------------------------|------------|-----------| -| `container` | Container name | True | flake8 | -| `command` | Command to be executed | True | | - -Usage: `b5 shell ` - -#### Examples - -```shell -# Execute flake8 lint in `components`-folder: -b5 shell flake8 components - -# Get Shell in perception container (hypothetic example) -b5 shell perception -``` - -## Project setup / maintenance - -### `b5 install` - -Setup the project. Has to be run after cloning the project. - -### `b5 update` - -Update the project. - -## Project linting - -### `b5 lint` - -Runs the project linters. More documentation about linting can be found [here](../02_development/02_linting.md). - -### `b5 python:lint` - -Runs the python linter. More documentation about linting can be found [here](../02_development/02_linting.md). - -### `b5 markdown:lint` - -Runs the markdown linter. More documentation about linting can be found [here](../02_development/02_linting.md). - -## Shortcuts for ROS - -Shortcuts to run the ROS commands directly in the container. Detailed documentation -about this commands can be found [here](http://wiki.ros.org/ROS/CommandLineTools#Common_user_tools). - -For more complex tasks it's easier to just get a shell into the container with `b5 shell` and run the commands there. 
- -`b5 rosbag` -`b5 ros_readbagfile` -`b5 rosbash` -`b5 roscd` -`b5 rosclean` -`b5 roscore` -`b5 rosdep` -`b5 rosed` -`b5 roscreate-pkg` -`b5 roscreate-stack` -`b5 rosrun` -`b5 roslaunch` -`b5 roslocate` -`b5 rosmake` -`b5 rosmsg` -`b5 rosnode` -`b5 rospack` -`b5 rosparam` -`b5 rossrv` -`b5 rosservice` -`b5 rosstack` -`b5 rostopic` -`b5 rosversion` -`b5 rqt_graph` - -## 🚨 Common Problems - -` -REQUIRED process [carla_ros_bridge-1] has died! -` - -If the execution of `b5 run` is stopping because of this error the reason might be a duplicate Carla ROS bridge. - -To eliminate this problem, run `b5 halt --remove-orphans`. diff --git a/doc/01_general/Readme.md b/doc/01_general/Readme.md index de254e8f..313b5e96 100644 --- a/doc/01_general/Readme.md +++ b/doc/01_general/Readme.md @@ -3,5 +3,4 @@ This Folder contains instruction how to execute the project and what it does. 1. [Installation](./02_installation.md) -2. [Available b5 commands](./03_commands.md) -3. [Current architecture of the agent](./04_architecture.md) +2. [Current architecture of the agent](./04_architecture.md) diff --git a/doc/02_development/02_linting.md b/doc/02_development/02_linting.md index ab42b32f..728a1500 100644 --- a/doc/02_development/02_linting.md +++ b/doc/02_development/02_linting.md @@ -18,8 +18,6 @@ To enforce unified standards in all python files, we use [Flake8](https://pypi.o To enforce unified standards in all markdown files, we use [markdownlint-cli](https://github.com/igorshubovych/markdownlint-cli). More details on it can be found in the according documentation. -The markdown linter can fix some errors on its own by executing `b5 markdown:fix`. - ## 🚨 Common Problems Currently, we are not aware about any Problems. 
diff --git a/doc/02_development/10_installing_python_packages.md b/doc/02_development/10_installing_python_packages.md index 445cfe9c..7cb876a6 100644 --- a/doc/02_development/10_installing_python_packages.md +++ b/doc/02_development/10_installing_python_packages.md @@ -28,15 +28,3 @@ An example how this file could look like is given below: torch==1.13.0 torchvision==0.1.9 ``` - -To install the added packages run `b5 install` afterwards. - -## Common Problems - -Sometimes, PyCharm does not recognize installed packages on the docker container. -This leads to the problem that the program cannot be started in PyCharm via the run button, but only via command line. - -A workaround for this problem is: - -1. Run ```docker compose build``` in the console in the build folder. -2. Click on the python interpreter in the lower right corner and reselect it. diff --git a/doc/02_development/14_distributed_simulation.md b/doc/02_development/14_distributed_simulation.md index 75a981d7..80dc7692 100644 --- a/doc/02_development/14_distributed_simulation.md +++ b/doc/02_development/14_distributed_simulation.md @@ -51,8 +51,8 @@ Replace the ip-address in the following files: ### Start the agent on your local machine ```bash -b5 run_distributed -b5 run_dev_distributed +docker compose -f build/docker-compose_distributed.yaml up +docker compose -f build/docker-compose_dev_distributed.yaml up ``` ## How do you know that you do not have enough compute resources? diff --git a/doc/02_development/installing_cuda.md b/doc/02_development/installing_cuda.md index 0daa0002..832325b1 100644 --- a/doc/02_development/installing_cuda.md +++ b/doc/02_development/installing_cuda.md @@ -17,7 +17,6 @@ Marco Riedenauer ## First install For execution of the program, cuda-toolkit v11.7 has to be installed on both, your computer and the docker container. 
-Cuda-toolkit should already be installed on the docker container by executing ```b5 install``` in your build folder: For installing cuda-toolkit on your computer, execute step-by-step the following commands in your command line: @@ -51,8 +50,3 @@ the installer outputs that already another version of cuda-toolkit is installed, you have to uninstall the old version first. This can be done by executing the file `cuda-uninstaller` in the installation folder, usually `/usr/local/cuda-x.y/bin`. - -### Executing b5 install/update leads to an error of incompatible nvcc and drivers - -I had this problem after reinstalling cuda-toolkit on my computer. The best workaround I found is to uninstall all -NVIDIA drivers and cuda-toolkit and reinstall both of them. diff --git a/doc/06_perception/01_dataset_generator.md b/doc/06_perception/01_dataset_generator.md index ad688f7d..e1a71aff 100644 --- a/doc/06_perception/01_dataset_generator.md +++ b/doc/06_perception/01_dataset_generator.md @@ -81,11 +81,9 @@ index d1ae1df..ef1b503 100644 To run the dataset generator, first the Carla Simulator has to be running: - ```bash - b5 run carla-simulator - ``` +Start the docker container `leaderboard-2.0`. -You can then run the dataset generator by executing the following command in the `b5 shell`: +You can then run the dataset generator by executing the following command in an attached shell: ```bash python3 perception/src/dataset_generator.py --host carla-simulator --port 2000 --use-empty-world @@ -119,7 +117,7 @@ the # ... 
``` -Once the leaderboard evaluator is running, you can start the dataset generator in the `b5 shell`: +Once the leaderboard evaluator is running, you can start the dataset generator in an attached shell: ```bash python3 perception/src/dataset_generator.py --host carla-simulator --port 2000 diff --git a/doc/06_perception/07_position_heading_filter_debug_node.md b/doc/06_perception/07_position_heading_filter_debug_node.md index fd1da95c..34f7ec2c 100644 --- a/doc/06_perception/07_position_heading_filter_debug_node.md +++ b/doc/06_perception/07_position_heading_filter_debug_node.md @@ -183,7 +183,7 @@ To be able to save data in csv files you just need to uncomment the saving metho To use the [viz.py](../../code/perception/src/00_Experiments/Position_Heading_Datasets/viz.py) file you will have to: 1. Configure the main method to your likings inside the viz.py: ![picture](/doc/00_assets/perception/sensor_debug_viz_config.png) -2. Open up the b5 shell typing ```b5 shell``` into the terminal +2. Open up an attached shell 3. Navigate to the code/perception/src/00_Experiments/Position_Heading folder using ```cd``` 4. run the viz.py using ```python viz.py``` diff --git a/doc/07_planning/01_py_trees.md b/doc/07_planning/01_py_trees.md index 94cab5db..5d21de8c 100644 --- a/doc/07_planning/01_py_trees.md +++ b/doc/07_planning/01_py_trees.md @@ -45,10 +45,9 @@ There is a very simple example for pytrees. Run: -1. call `b5 update` to update docker container -2. call `b5 run` to start container -3. in a second shell call `b5 shell` -4. run `py-trees-demo-behaviour-lifecycle` to execute the example +1. Start the dev container for the agent +2. Attach a shell to the container +3. 
run `py-trees-demo-behaviour-lifecycle` to execute the example ## Common commands From 6c9b3da4bcb8475170e0f2a0822ba638f92660d0 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Tue, 8 Oct 2024 15:11:01 +0200 Subject: [PATCH 11/28] Removed PyCharm & changed docstring settings --- .markdownlint.yaml | 2 +- build/docker/agent/Dockerfile | 2 +- build/docker/agent/Dockerfile_Submission | 2 +- doc/02_development/04_coding_style.md | 4 - doc/02_development/06_pycharm_setup.md | 81 ------------------- doc/02_development/08_project_management.md | 6 -- .../templates/template_class.py | 24 ++---- .../templates/template_wiki_page.md | 36 +++------ 8 files changed, 21 insertions(+), 136 deletions(-) delete mode 100644 doc/02_development/06_pycharm_setup.md diff --git a/.markdownlint.yaml b/.markdownlint.yaml index b49b73db..fe64f4e8 100755 --- a/.markdownlint.yaml +++ b/.markdownlint.yaml @@ -6,7 +6,7 @@ MD013: tables: false MD004: - style: "consistent" + style: dash MD051: false diff --git a/build/docker/agent/Dockerfile b/build/docker/agent/Dockerfile index 107ece04..1917b372 100644 --- a/build/docker/agent/Dockerfile +++ b/build/docker/agent/Dockerfile @@ -139,7 +139,7 @@ ENV CARLA_SIM_HOST=localhost ENV CARLA_SIM_WAIT_SECS=15 ENV SCENARIO_RUNNER_PATH=/opt/scenario_runner -# setup python path for PyCharm integration +# setup python path RUN echo /catkin_ws/install/lib/python3/dist-packages >> /home/$USERNAME/.local/lib/python3.8/site-packages/carla.pth && \ echo /catkin_ws/devel/lib/python3/dist-packages >> /home/$USERNAME/.local/lib/python3.8/site-packages/carla.pth && \ echo /opt/ros/noetic/lib/python3/dist-packages >> /home/$USERNAME/.local/lib/python3.8/site-packages/carla.pth && \ diff --git a/build/docker/agent/Dockerfile_Submission b/build/docker/agent/Dockerfile_Submission index a329247e..128a8bd8 100644 --- a/build/docker/agent/Dockerfile_Submission +++ b/build/docker/agent/Dockerfile_Submission @@ -141,7 +141,7 @@ ENV CARLA_SIM_HOST=localhost ENV 
CARLA_SIM_WAIT_SECS=15 ENV SCENARIO_RUNNER_PATH=/opt/scenario_runner -# setup python path for PyCharm integration +# setup python path RUN echo /catkin_ws/install/lib/python3/dist-packages >> /home/$USERNAME/.local/lib/python3.8/site-packages/carla.pth && \ echo /catkin_ws/devel/lib/python3/dist-packages >> /home/$USERNAME/.local/lib/python3.8/site-packages/carla.pth && \ echo /opt/ros/noetic/lib/python3/dist-packages >> /home/$USERNAME/.local/lib/python3.8/site-packages/carla.pth && \ diff --git a/doc/02_development/04_coding_style.md b/doc/02_development/04_coding_style.md index 0455af0c..390d6859 100644 --- a/doc/02_development/04_coding_style.md +++ b/doc/02_development/04_coding_style.md @@ -20,10 +20,6 @@ VSCode Extensions: - autoDostring - Python Docstring Generator by Nils Werner -To get the ReST format like in Pycharm: - -- Go to Extension setting and change it under `Auto Doctring:Docstring Format` to `sphinx-notypes` - --- - [Coding style guidelines](#coding-style-guidelines) diff --git a/doc/02_development/06_pycharm_setup.md b/doc/02_development/06_pycharm_setup.md deleted file mode 100644 index 7dc4e8d1..00000000 --- a/doc/02_development/06_pycharm_setup.md +++ /dev/null @@ -1,81 +0,0 @@ -# PyCharm Professional - -(Kept from previous group [paf22]) - -For a seamless development experience, we recommend the use of [PyCharm Professional](https://www.jetbrains.com/pycharm/). - -## Getting an education license - -To use PyCharm Professional, you need a license. -Fortunately, all students of Uni-Augsburg can get a free education license using their @uni-a.de mail-address. - -For this, follow the process on the Jetbrains website: [Request Education License](https://www.jetbrains.com/shop/eform/students). 
- -After completing this process, you can continue to install PyCharm Professional - -## Installing PyCharm professional - -### Jetbrains Toolbox - -The easiest way to install PyCharm Professional and keep it up to date, is to use [Jetbrains Toolbox](https://www.jetbrains.com/toolbox-app/). - -For easy installation, there is a [convenience script](https://github.com/nagygergo/jetbrains-toolbox-install), -that downloads JetBrains toolbox and installs it to the right folder. - -```shell -sudo curl -fsSL https://raw.githubusercontent.com/nagygergo/jetbrains-toolbox-install/master/jetbrains-toolbox.sh | bash -``` - -After this you can open the toolbox with the following command: - -```shell -jetbrains-toolbox -``` - -The interface should open, and you can easily install _PyCharm Professional_. - -### Setting up docker-compose standalone binary - -To use the docker-compose integration of PyCharm Professional, -you additionally need to install the standalone version of [docker-compose](https://docs.docker.com/compose/install/other/) - -```shell -# Download binary -sudo curl -SL https://github.com/docker/compose/releases/download/v2.12.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose - -# Make binary executable -sudo chmod +x /usr/local/bin/docker-compose - -# Create symbolic link to make the binary discoverable -sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose - -``` - -### Setting up the paf22 project with docker-compose interpreter - -After opening and activating PyCharm Professional with your education license, open the existing paf22 project-folder in PyCharm Professional. - -The last step is to set up the docker-compose integration. -For this, please follow this [official guide](https://www.jetbrains.com/help/pycharm/using-docker-compose-as-a-remote-interpreter.html#docker-compose-remote), while selecting `./build/docker-compose.yml` as configuration file and `agent` as service. 
- -After the initial indexing, PyCharm will provide intelligent code feedback and refactoring options in Python. - -## 🚨 Common Problems - -* Error when committing via PyCharm (error message may vary): - - ```shell - ... - .git/hooks/pre-commit: 9: ././build/hooks/pre-commit.d/20-markdown.sh: b5: not found - ``` - - This may happen if you installed b5 in your conda environment instead of the native one. - To fix this, install b5 in your native environment: - - ```shell - conda deactivate - sudo apt-get install python pip - pip install b5 - ``` - - After that, the commit should work! diff --git a/doc/02_development/08_project_management.md b/doc/02_development/08_project_management.md index c82c2035..97821432 100644 --- a/doc/02_development/08_project_management.md +++ b/doc/02_development/08_project_management.md @@ -86,12 +86,6 @@ CARLA simulator crashes on startup on your machine. To create a pull request, go to the [branches overview](https://github.com/ll7/paf22/branches) and select ``New Pull Request`` for the branch you want to create a PR for. ![img.png](../00_assets/branch_overview.png) -Alternatively you can create a PR directly from PyCharm using the ``Pull Request`` tab on the sidebar. - -![img.png](../00_assets/Pycharm_PR.png) - -For completing the pull request, fill out the template that opens up automatically. - Merge the pull request after the review process is complete and all the feedback from the reviewer has been worked in. For more information about the review process, see [Review process](./07_review_guideline.md). diff --git a/doc/02_development/templates/template_class.py b/doc/02_development/templates/template_class.py index 9db00607..164c3585 100644 --- a/doc/02_development/templates/template_class.py +++ b/doc/02_development/templates/template_class.py @@ -71,29 +71,15 @@ def test_function3(self): # inline comment # 6. 
Docstrings # ############################# def test_function4(self, param1, param2): - # This docstring style is supported by Sphinx and helps with automated API documentation creation, automatically created by PyCharm - """ - This is the description of the function. + # This docstring style is the default google style of the autoDocstring extension and helps with automated API documentation creation + """This is the description of the function. - :param param1: first parameter - :param param2: second parameter - :return: return value(s) + Args: + param1 (_type_): _description_ + param2 (_type_): _description_ """ pass - def test_function5(self, param1, param2): - # This docstring style is supported by Sphinx and helps with automated API documentation creation, automatically created by VSCode extension autoDocstring - # VSCode Extentsion: autoDocstring- Python Docstring Generator by Nils Werner - # To get the ReST format like in Pycharm - # Go to Extension setting and change it under `Auto Doctring:Docstring Format` to `sphinx-notypes` - """_summary_ - - :param param1: _description_ - :param param2: _description_ - :return: _description_ - """ - return param1 - # main function of the class def main(self): print("Hello World") diff --git a/doc/02_development/templates/template_wiki_page.md b/doc/02_development/templates/template_wiki_page.md index 68deb358..9ef7fd3c 100644 --- a/doc/02_development/templates/template_wiki_page.md +++ b/doc/02_development/templates/template_wiki_page.md @@ -20,34 +20,24 @@ VSCode Extensions: --- -How to generate a TOC in VSCode and Pycharm: +How to generate a TOC in VSCode: VSCode: -1. Install Markdown All in One via Extensions -2. ``Ctrl+Shift+P`` -3. Command "Create Table of Contents" - -Cosmetic change: Markdown All in One uses `-` as unordered list indicator, to change it to `*` like in Pycharm - -Go to Extension setting and change it under `Markdown>Extension>Toc>Unordered List:Marker` - -Pycharm: - -1. ``Alt+Ins`` -2. 
Select Table of Contents -3. To update Table of Contents follow Step 1. and select Update Table of Contents +1. ``Ctrl+Shift+P`` +2. Command "Create Table of Contents" -* [Title of wiki page](#title-of-wiki-page) - * [Author](#author) - * [Date](#date) - * [Prerequisite](#prerequisite) - * [Cheat Sheet](#cheat-sheet) - * [Basics](#basics) - * [Extended](#extended) - * [more Content](#more-content) - * [Sources](#sources) +- [Title of wiki page](#title-of-wiki-page) + - [Author](#author) + - [Date](#date) + - [Prerequisite](#prerequisite) + - [Cheat Sheet](#cheat-sheet) + - [Basics](#basics) + - [Extended](#extended) + - [My Great Heading {#custom-id}](#my-great-heading-custom-id) + - [more Content](#more-content) + - [Sources](#sources) ## Cheat Sheet From 87178de2b7c5f4b1a7f14b49b74a94175c6f8dbd Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Tue, 8 Oct 2024 15:22:44 +0200 Subject: [PATCH 12/28] Changed docs to adhere to new markdown linting --- .../templates/template_wiki_page.md | 14 +- .../templates/template_wiki_page_empty.md | 18 +- doc/03_research/01_acting/01_basics_acting.md | 100 ++--- .../01_acting/02_implementation_acting.md | 56 +-- .../01_acting/03_paf21_1_acting.md | 30 +- .../01_acting/05_autoware_acting.md | 30 +- doc/03_research/01_acting/Readme.md | 14 +- .../03_first_implementation_plan.md | 67 +-- doc/03_research/02_perception/Readme.md | 16 +- .../03_planning/00_paf22/02_basics.md | 242 +++++------ .../03_planning/00_paf22/03_Implementation.md | 73 ++-- .../00_paf22/04_decision_making.md | 214 +++++----- .../00_paf22/05_Navigation_Data.md | 38 +- .../00_paf22/06_state_machine_design.md | 131 +++--- .../03_planning/00_paf22/07_OpenDrive.md | 232 +++++------ .../07_reevaluation_desicion_making.md | 49 ++- doc/03_research/03_planning/Readme.md | 4 +- .../02_informations_from_leaderboard.md | 118 +++--- .../04_requirements/03_requirements.md | 69 ++-- .../04_requirements/04_use_cases.md | 386 ++++++++++++------ 
doc/03_research/04_requirements/Readme.md | 6 +- doc/03_research/Readme.md | 8 +- doc/06_perception/02_dataset_structure.md | 20 +- .../03_lidar_distance_utility.md | 10 +- doc/06_perception/04_efficientps.md | 44 +- 25 files changed, 1068 insertions(+), 921 deletions(-) diff --git a/doc/02_development/templates/template_wiki_page.md b/doc/02_development/templates/template_wiki_page.md index 9ef7fd3c..8679286f 100644 --- a/doc/02_development/templates/template_wiki_page.md +++ b/doc/02_development/templates/template_wiki_page.md @@ -16,7 +16,7 @@ Josef Kircher VSCode Extensions: -* Markdown All in One by Yu Zhang (for TOC) +- Markdown All in One by Yu Zhang (for TOC) --- @@ -74,9 +74,9 @@ Ordered List --- Unordered List -* First item -* Second item -* Third item +- First item +- Second item +- Third item --- Code @@ -142,10 +142,10 @@ Strikethrough Task List -* [x] Write the press release -* [ ] Update the website +- [x] Write the press release +- [ ] Update the website -* [ ] Contact the media +- [ ] Contact the media --- diff --git a/doc/02_development/templates/template_wiki_page_empty.md b/doc/02_development/templates/template_wiki_page_empty.md index bd0eb1ff..2992fd64 100644 --- a/doc/02_development/templates/template_wiki_page_empty.md +++ b/doc/02_development/templates/template_wiki_page_empty.md @@ -16,18 +16,20 @@ Josef Kircher VSCode Extensions: -* Markdown All in One by Yu Zhang (for TOC) +- Markdown All in One by Yu Zhang (for TOC) --- + -* [Title of wiki page](#title-of-wiki-page) - * [Author](#author) - * [Date](#date) - * [Prerequisite](#prerequisite) - * [Some Content](#some-content) - * [more Content](#more-content) - * [Sources](#sources) +- [Title of wiki page](#title-of-wiki-page) + - [Author](#author) + - [Date](#date) + - [Prerequisite](#prerequisite) + - [Some Content](#some-content) + - [more Content](#more-content) + - [Sources](#sources) + ## Some Content ## more Content diff --git a/doc/03_research/01_acting/01_basics_acting.md 
b/doc/03_research/01_acting/01_basics_acting.md
index deeab2e1..1b6b41f2 100644
--- a/doc/03_research/01_acting/01_basics_acting.md
+++ b/doc/03_research/01_acting/01_basics_acting.md
@@ -19,38 +19,38 @@ Gabriel Schwald, Julian Graf

The job of this domain is to translate a preplanned trajectory into actual steering controls for the vehicle.

-* safety:
-  * never exceeding vehicle limits
-  * never exceeding speed limits
-  * never leaf path
-* driving comfort?
+- safety:
+  - never exceeding vehicle limits
+  - never exceeding speed limits
+  - never leave path
+- driving comfort?

## Solutions from old PAF projects

### [Paf 20/1](https://github.com/ll7/psaf1/tree/master/psaf_ros/psaf_steering)

-* [carla_ackermann_control](https://carla.readthedocs.io/projects/ros-bridge/en/latest/carla_ackermann_control/) modified for [twist-msgs](http://docs.ros.org/en/noetic/api/geometry_msgs/html/msg/Twist.html)
-* input: [twist-msgs](http://docs.ros.org/en/noetic/api/geometry_msgs/html/msg/Twist.html) (for velocity)
-* velocity control: PID
-* lateral control: PD (heading error)
+- [carla_ackermann_control](https://carla.readthedocs.io/projects/ros-bridge/en/latest/carla_ackermann_control/) modified for [twist-msgs](http://docs.ros.org/en/noetic/api/geometry_msgs/html/msg/Twist.html)
+- input: [twist-msgs](http://docs.ros.org/en/noetic/api/geometry_msgs/html/msg/Twist.html) (for velocity)
+- velocity control: PID
+- lateral control: PD (heading error)

### [Paf 21/1](https://github.com/ll7/paf21-1/wiki/Vehicle-Controller)

-* input: waypoints
-* curve detection: returns distance to next curve
-* calculation of max curve speed as sqrt(friction_coefficient x gravity_accel x radius)
-* in Curve: [naive Controller](###Pure_Pursuit)
-* on straights: [Stanley Controller](###Stanley)
-* interface to rosbridge
+- input: waypoints
+- curve detection: returns distance to next curve
+- calculation of max curve speed as sqrt(friction_coefficient x gravity_accel x radius)
+- in Curve: [naive 
Controller](###Pure_Pursuit)
+- on straights: [Stanley Controller](###Stanley)
+- interface to rosbridge

### [Paf 20/2](https://github.com/ll7/psaf2) and [Paf 21/2](https://github.com/ll7/paf21-2/tree/main/paf_ros/paf_actor#readme)

-* input: odometry(position and velocity with uncertainty), local path
-* lateral: [Stanley Controller](###Stanley)
-* speed controller: pid
-* ACC (Adaptive Cruise Control): (speed, distance) -> PID
-* Unstuck-Routine (drive backwards)
-* Emergency Modus: fastest possible braking ([Tests](https://github.com/ll7/paf21-2/blob/main/docs/paf_actor/backwards/braking.md) -> handbrake with throttle, 30° steering and reverse)
+- input: odometry (position and velocity with uncertainty), local path
+- lateral: [Stanley Controller](###Stanley)
+- speed controller: PID
+- ACC (Adaptive Cruise Control): (speed, distance) -> PID
+- Unstuck routine (drive backwards)
+- Emergency mode: fastest possible braking ([Tests](https://github.com/ll7/paf21-2/blob/main/docs/paf_actor/backwards/braking.md) -> handbrake with throttle, 30° steering and reverse)

## Lateral control

@@ -87,11 +87,11 @@

$$
\delta(t) = arctan(2L*\frac{sin(\alpha)}{K_d*v})
$$

-* simple controller
-* ignores dynamic forces
-* assumes no-slip condition
-* possible improvement: vary the look-ahead distance based on vehicle velocity
-* not really suited for straights, because ICR moves towards infinity this case
+- simple controller
+- ignores dynamic forces
+- assumes no-slip condition
+- possible improvement: vary the look-ahead distance based on vehicle velocity
+- not really suited for straights, because the ICR moves towards infinity in this case

### Stanley

@@ -118,7 +118,7 @@ The basic idea of MPC is to model the future behavior of the vehicle and compute

![MPC Controller](../../00_assets/research_assets/mpc.png)
*source: [[5]](https://dingyan89.medium.com/three-methods-of-vehicle-lateral-control-pure-pursuit-stanley-and-mpc-db8cc1d32081)*

-* cost function can be designed to account for
driving comfort
+- cost function can be designed to account for driving comfort

### [SMC](https://en.wikipedia.org/wiki/Sliding_mode_control) (sliding mode control)

@@ -128,10 +128,10 @@ Real implementations of sliding mode control approximate theoretical behavior wi

![chattering](../../00_assets/research_assets/chattering.gif)
*source: [[9]](https://ieeexplore.ieee.org/document/1644542)*

-* simple
-* robust
-* stabile
-* disadvantage: chattering -> controller is ill-suited for this application
+- simple
+- robust
+- stable
+- disadvantage: chattering -> controller is ill-suited for this application

Sources:

@@ -155,20 +155,20 @@ PID: already implemented in [ROS](http://wiki.ros.org/pid) (and [CARLA](https://

Further information:

-* 
+- 

## Interface

**subscribes** to:

-* current position
+- current position
([nav_msgs/Odometry Message](http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html)) from Perception?
-* path ([nav_msgs/Path Message](https://docs.ros.org/en/api/nav_msgs/html/msg/Path.html)) or target point ([geometry_msgs/Pose.msg](https://docs.ros.org/en/api/geometry_msgs/html/msg/Pose.html))
-* (maximal) velocity to drive
-* (distance and speed of vehicle to follow)
-* (commands for special routines)
-* (Distance to obstacles for turning/min turning radius)
-* (Road conditions)
+- path ([nav_msgs/Path Message](https://docs.ros.org/en/api/nav_msgs/html/msg/Path.html)) or target point ([geometry_msgs/Pose.msg](https://docs.ros.org/en/api/geometry_msgs/html/msg/Pose.html))
+- (maximal) velocity to drive
+- (distance and speed of vehicle to follow)
+- (commands for special routines)
+- (Distance to obstacles for turning/min turning radius)
+- (Road conditions)

**publishes**: [CarlaEgoVehicleControl.msg](https://carla.readthedocs.io/projects/ros-bridge/en/latest/ros_msgs/#carlaegovehiclecontrolmsg) or [ackermann_msgs/AckermannDrive.msg](https://docs.ros.org/en/api/ackermann_msgs/html/msg/AckermannDrive.html)
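The Pure Pursuit and Stanley laws discussed above can be condensed into a short sketch. This is illustrative only: the wheelbase, the gains `k_d` and `k`, and the standstill guard are assumed placeholder values, not parameters taken from any of the referenced projects.

```python
import math


def pure_pursuit_steering(alpha: float, speed: float,
                          wheelbase: float = 2.85, k_d: float = 1.0) -> float:
    """Pure pursuit: delta = arctan(2L * sin(alpha) / (K_d * v)).

    alpha is the angle between the vehicle heading and the look-ahead point;
    the look-ahead distance is assumed proportional to speed (l_d = k_d * v).
    """
    v = max(speed, 0.1)  # guard against division by zero at standstill
    return math.atan2(2.0 * wheelbase * math.sin(alpha), k_d * v)


def stanley_steering(heading_error: float, cross_track_error: float,
                     speed: float, k: float = 1.0) -> float:
    """Stanley control: heading error plus a speed-scaled cross-track term."""
    v = max(speed, 0.1)
    return heading_error + math.atan2(k * cross_track_error, v)
```

As the notes above point out, pure pursuit degrades on straights (the steering term vanishes as the curve radius grows), which is one reason PAF 21/1 switches to Stanley there.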
In the [CarlaEgoVehicleInfo.msg](https://carla.readthedocs.io/projects/ros-bridge/en/latest/ros_msgs/#carlaegovehicleinfomsg) we get a [CarlaEgoVehicleInfoWheel.msg](https://carla.readthedocs.io/projects/ros-bridge/en/latest/ros_msgs/#carlaegovehicleinfowheelmsg) which provides us with

-* tire_friction (a scalar value that indicates the friction of the wheel)
-* max_steer_angle (the maximum angle in degrees that the wheel can steer)
-* max_brake_torque (the maximum brake torque in Nm)
-* max_handbrake_torque (the maximum handbrake torque in Nm)
+- tire_friction (a scalar value that indicates the friction of the wheel)
+- max_steer_angle (the maximum angle in degrees that the wheel can steer)
+- max_brake_torque (the maximum brake torque in Nm)
+- max_handbrake_torque (the maximum handbrake torque in Nm)

The max curve speed can be calculated as sqrt(**friction_coefficient** * gravity_accel * curve_radius).

@@ -193,12 +193,12 @@ For debugging purposes the vehicle's path can be visualized using [carlaviz](http

## Additional functionality (open for discussion)

-* ACC (Adaptive Cruise Control): reduces speed to keep set distance to vehicle in front (see also [cruise control technology review](https://www.sciencedirect.com/science/article/pii/S004579069700013X),
+- ACC (Adaptive Cruise Control): reduces speed to keep set distance to vehicle in front (see also [cruise control technology review](https://www.sciencedirect.com/science/article/pii/S004579069700013X),
[a comprehensive review of the development of adaptive cruise control systems](https://www.researchgate.net/publication/245309633_A_comprehensive_review_of_the_development_of_adaptive_cruise_control_systems),
[towards an understanding of adaptive cruise control](https://www.sciencedirect.com/science/article/pii/S0968090X0000022X),
[Encyclopedia of Systems and Control](https://dokumen.pub/encyclopedia-of-systems-and-control-2nd-ed-2021-3030441830-9783030441838.html))
-* emergency braking: stops the car as fast as
possible
-* emergency braking assistant: uses Lidar as proximity sensor and breaks if it would come to a collision without breaking
-* parallel parking: executes [fixed parking sequence](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5705869) to parallel park vehicle in given parking space
-* U-Turn: performs u-turn
-* Driving backwards: might a need different controller configuration
-* Unstuck routine: performs fixed routine (e.g. driving backwards) if the car hasn't moved in a while
+- emergency braking: stops the car as fast as possible
+- emergency braking assistant: uses Lidar as a proximity sensor and brakes if a collision would occur without braking
+- parallel parking: executes [fixed parking sequence](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5705869) to parallel park vehicle in given parking space
+- U-Turn: performs u-turn
+- Driving backwards: might need a different controller configuration
+- Unstuck routine: performs fixed routine (e.g. driving backwards) if the car hasn't moved in a while
diff --git a/doc/03_research/01_acting/02_implementation_acting.md b/doc/03_research/01_acting/02_implementation_acting.md
index 0d7a216e..dd7b45d5 100644
--- a/doc/03_research/01_acting/02_implementation_acting.md
+++ b/doc/03_research/01_acting/02_implementation_acting.md
@@ -15,14 +15,14 @@ Gabriel Schwald

---

-* [Requirements and challenges for an acting implementation](#requirements-and-challenges-for-an-acting-implementation)
-  * [Authors](#authors)
-  * [Date](#date)
-  * [Planned basic implementation of the Acting domain](#planned-basic-implementation-of-the-acting-domain)
-  * [List of basic functions](#list-of-basic-functions)
-  * [List of Inputs/Outputs](#list-of-inputsoutputs)
-  * [Challenges](#challenges)
-  * [Next steps](#next-steps)
+- [Requirements and challenges for an acting implementation](#requirements-and-challenges-for-an-acting-implementation)
+  - [Authors](#authors)
+  - [Date](#date)
+  - [Planned basic 
implementation of the Acting domain](#planned-basic-implementation-of-the-acting-domain)
+  - [List of basic functions](#list-of-basic-functions)
+  - [List of Inputs/Outputs](#list-of-inputsoutputs)
+  - [Challenges](#challenges)
+  - [Next steps](#next-steps)

This document sums up all functions already agreed upon in [#24](https://github.com/ll7/paf22/issues/24) regarding [acting](../01_acting/01_acting.md), that could be implemented in the next sprint.

@@ -36,34 +36,34 @@ These goals lead to the following requirements:

## List of basic functions

-* Longitudinal control
-  * PID controller
-* Lateral control
-  * Pure Pursuit controller
-  * Stanley controller
+- Longitudinal control
+  - PID controller
+- Lateral control
+  - Pure Pursuit controller
+  - Stanley controller

## List of Inputs/Outputs

-* Subscribes to:
-  * [nav_msgs/Odometry Message](http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html) : to get the current position and heading
-  * [nav_msgs/Path Message](https://docs.ros.org/en/api/nav_msgs/html/msg/Path.html) : to get the current trajectory
-  * emergency breaking msg : to initiate emergency breaking
-  * speed limit msg : to get the maximum velocity
-* Publishes:
-  * [CarlaEgoVehicleControl.msg](https://carla.readthedocs.io/projects/ros-bridge/en/latest/ros_msgs/#carlaegovehiclecontrolmsg) : to actually control the vehicles throttle, steering, ...
+- Subscribes to:
+  - [nav_msgs/Odometry Message](http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html) : to get the current position and heading
+  - [nav_msgs/Path Message](https://docs.ros.org/en/api/nav_msgs/html/msg/Path.html) : to get the current trajectory
+  - emergency braking msg : to initiate emergency braking
+  - speed limit msg : to get the maximum velocity
+- Publishes:
+  - [CarlaEgoVehicleControl.msg](https://carla.readthedocs.io/projects/ros-bridge/en/latest/ros_msgs/#carlaegovehiclecontrolmsg) : to actually control the vehicle's throttle, steering, ...
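The PID controller listed for longitudinal control could look roughly like this — a minimal discrete PID with output clamping and a crude integral limit. All gains and limits here are assumed placeholders for illustration, not values tuned for CARLA.

```python
class PidController:
    """Minimal discrete PID for velocity control (illustrative sketch only)."""

    def __init__(self, kp=0.5, ki=0.1, kd=0.05, integral_limit=10.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral_limit = integral_limit  # crude anti-windup bound
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_speed, current_speed, dt):
        """Return a throttle command in [0, 1] for one control cycle of length dt."""
        error = target_speed - current_speed
        # accumulate and clamp the integral term to limit windup
        self.integral = max(-self.integral_limit,
                            min(self.integral_limit, self.integral + error * dt))
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(1.0, u))  # negative u would map to braking instead
```

In a node this would be called once per control cycle with the speed limit (or ACC target) and the measured speed, and the result written into the throttle field of the published control message.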
## Challenges

A short list of challenges for the implementation of a basic acting domain and how these could be tackled based on the requirements mentioned above.

-* The vehicle needs to know its own position => [nav_msgs/Odometry Message](http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html) or [GNSS](https://carla.readthedocs.io/en/latest/ref_sensors/#gnss-sensor) sensor
-* The vehicle needs to know its own velocity => can be calculated from last/current position and time or the [speedometer](https://leaderboard.carla.org/#map-track) pseudosensor can be used
-* The vehicle needs to know its planned trajectory => [nav_msgs/Path Message](https://docs.ros.org/en/api/nav_msgs/html/msg/Path.html) this trajectory may need to be updated to accommodate obstacles
-* Longitudinal control => a simple PID controller should suffice
-* lateral control => Pure Pursuit as well as Stanley controller should be implemented, following tests can show, where to use each controller.
-* additional features:
-  * emergency breaking => this command is supposed to bypass longitudinal and lateral controllers (and should use the bug discoverd by [paf21-2](https://github.com/ll7/paf21-2/tree/main/paf_ros/paf_actor#bugabuses))
-  * additional functionality mostly should be added here ... 
+- The vehicle needs to know its own position => [nav_msgs/Odometry Message](http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html) or [GNSS](https://carla.readthedocs.io/en/latest/ref_sensors/#gnss-sensor) sensor
+- The vehicle needs to know its own velocity => can be calculated from last/current position and time or the [speedometer](https://leaderboard.carla.org/#map-track) pseudosensor can be used
+- The vehicle needs to know its planned trajectory => [nav_msgs/Path Message](https://docs.ros.org/en/api/nav_msgs/html/msg/Path.html) this trajectory may need to be updated to accommodate obstacles
+- Longitudinal control => a simple PID controller should suffice
+- Lateral control => Pure Pursuit as well as Stanley controller should be implemented; subsequent tests can show where to use each controller.
+- additional features:
+  - emergency braking => this command is supposed to bypass longitudinal and lateral controllers (and should use the bug discovered by [paf21-2](https://github.com/ll7/paf21-2/tree/main/paf_ros/paf_actor#bugabuses))
+  - additional functionality mostly should be added here ... 
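The curve-speed bound cited in the research notes above, sqrt(friction_coefficient x gravity_accel x radius), is a one-liner worth writing down. The default friction coefficient below is an assumed placeholder, not a value read from the simulator.

```python
import math


def max_curve_speed(radius_m: float, friction_coefficient: float = 0.7,
                    gravity_accel: float = 9.81) -> float:
    """Upper bound on cornering speed in m/s: v = sqrt(mu * g * r)."""
    return math.sqrt(friction_coefficient * gravity_accel * radius_m)
```

In practice the result would be capped by the current speed limit, and the friction coefficient could be taken from the `tire_friction` field of the CarlaEgoVehicleInfoWheel message mentioned in the basics document.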
## Next steps

diff --git a/doc/03_research/01_acting/03_paf21_1_acting.md b/doc/03_research/01_acting/03_paf21_1_acting.md
index 602205d9..c76dad25 100644
--- a/doc/03_research/01_acting/03_paf21_1_acting.md
+++ b/doc/03_research/01_acting/03_paf21_1_acting.md
@@ -2,34 +2,34 @@

## Inputs

-* waypoints of the planned route
-* general odometry of the vehicle
+- waypoints of the planned route
+- general odometry of the vehicle

## Curve Detection

-* Can detect curves on the planned trajectory
-* Calculates the speed in which to drive the detected Curve
+- Can detect curves on the planned trajectory
+- Calculates the speed at which to drive the detected curve

![Curve](../../00_assets/research_assets/curve_detection_paf21_1.png)

## Speed Control

-* [CARLA Ackermann Control](https://carla.readthedocs.io/projects/ros-bridge/en/latest/carla_ackermann_control/)
-* Speed is forwarded to the CARLA vehicle via Ackermann_message, which already includes a PID controller for safe driving/accelerating etc.
-* no further controlling needed -> speed can be passed as calculated
+- [CARLA Ackermann Control](https://carla.readthedocs.io/projects/ros-bridge/en/latest/carla_ackermann_control/)
+- Speed is forwarded to the CARLA vehicle via Ackermann_message, which already includes a PID controller for safe driving/accelerating etc. 
+- no further controlling needed -> speed can be passed as calculated ## Steering Control ### Straight Trajectories -* **Stanley Steering Controller** - * Calculates steering angle from offset and heading error - * includes PID controller +- **Stanley Steering Controller** + - Calculates steering angle from offset and heading error + - includes PID controller ![Stanley Controller](../../00_assets/research_assets/stanley_paf21_1.png) ### Detected Curves -* **Naive Steering Controller** (close to pure pursuit) - * uses Vehicle Position + Orientation + Waypoints - * Calculate direction to drive to as vector - * direction - orientation = Steering angle at each point in time - * speed is calculated in Curve Detection and taken as is +- **Naive Steering Controller** (close to pure pursuit) + - uses Vehicle Position + Orientation + Waypoints + - Calculate direction to drive to as vector + - direction - orientation = Steering angle at each point in time + - speed is calculated in Curve Detection and taken as is diff --git a/doc/03_research/01_acting/05_autoware_acting.md b/doc/03_research/01_acting/05_autoware_acting.md index 8ba6b880..bb84218f 100644 --- a/doc/03_research/01_acting/05_autoware_acting.md +++ b/doc/03_research/01_acting/05_autoware_acting.md @@ -2,11 +2,11 @@ ## Inputs -* Odometry (position and orientation, from Localization module) -* Trajectory (output of Planning) -* Steering Status (current steering of vehicle, from Vehicle Interface) -* Actuation Status (acceleration, steering, brake actuations, from Vehicle Interface) -* (“vehicle signal commands” directly into Vehicle Interface -> Handbrake, Hazard Lights, Headlights, Horn, Stationary Locking, Turn Indicators, Wipers etc.) 
+- Odometry (position and orientation, from Localization module) +- Trajectory (output of Planning) +- Steering Status (current steering of vehicle, from Vehicle Interface) +- Actuation Status (acceleration, steering, brake actuations, from Vehicle Interface) +- (“vehicle signal commands” directly into Vehicle Interface -> Handbrake, Hazard Lights, Headlights, Horn, Stationary Locking, Turn Indicators, Wipers etc.) ### General Component Architecture @@ -18,19 +18,19 @@ ## [Trajectory Follower](https://autowarefoundation.github.io/autoware.universe/main/control/trajectory_follower_base/) -* generates control command to follow reference trajectory from Planning -* computes lateral (steering) and longitudinal (velocity) controls separately -* lateral controller: mpc (model predictive) or pure pursuit -* longitudinal: “currently only” PID controller +- generates control command to follow reference trajectory from Planning +- computes lateral (steering) and longitudinal (velocity) controls separately +- lateral controller: mpc (model predictive) or pure pursuit +- longitudinal: “currently only” PID controller ## Vehicle Command Gate -* filters control commands to prevent abnormal values -* sends commands to [Vehicle Interface](https://autowarefoundation.github.io/autoware-documentation/main/design/autoware-interfaces/components/vehicle-interface/) +- filters control commands to prevent abnormal values +- sends commands to [Vehicle Interface](https://autowarefoundation.github.io/autoware-documentation/main/design/autoware-interfaces/components/vehicle-interface/) ## Outputs -* steering angle -* steering torque -* speed -* acceleration +- steering angle +- steering torque +- speed +- acceleration diff --git a/doc/03_research/01_acting/Readme.md b/doc/03_research/01_acting/Readme.md index b1e75e53..5bc58da5 100644 --- a/doc/03_research/01_acting/Readme.md +++ b/doc/03_research/01_acting/Readme.md @@ -2,10 +2,10 @@ This folder contains all the results of our research on 
acting: -* **PAF22** -* [Basics](./01_basics_acting.md) -* [Implementation](./02_implementation_acting.md) -* **PAF23** -* [PAF21_1 Acting](./03_paf21_1_acting.md) -* [PAF21_2 Acting & Pylot Control](./04_paf21_2_and_pylot_acting.md) -* [Autoware Control](./05_autoware_acting.md) +- **PAF22** +- [Basics](./01_basics_acting.md) +- [Implementation](./02_implementation_acting.md) +- **PAF23** +- [PAF21_1 Acting](./03_paf21_1_acting.md) +- [PAF21_2 Acting & Pylot Control](./04_paf21_2_and_pylot_acting.md) +- [Autoware Control](./05_autoware_acting.md) diff --git a/doc/03_research/02_perception/03_first_implementation_plan.md b/doc/03_research/02_perception/03_first_implementation_plan.md index 77498f1c..65ddbbc2 100644 --- a/doc/03_research/02_perception/03_first_implementation_plan.md +++ b/doc/03_research/02_perception/03_first_implementation_plan.md @@ -15,23 +15,24 @@ Marco Riedenauer --- -* [First Implementation Plan](#first-implementation-plan) - * [Authors](#authors) - * [Date](#date) - * [Overview](#overview) - * [Panoptic Segmentation](#panoptic-segmentation) - * [Things and Stuff](#things-and-stuff) - * [Things](#things) - * [Stuff](#stuff) - * [Segmentation Overview](#segmentation-overview) - * [Image Panoptic Segmentation](#image-panoptic-segmentation) - * [LIDAR Panoptic Segmentation](#lidar-panoptic-segmentation) - * [Position Validation](#position-validation) - * [Obstacle Detection and Object Classification](#obstacle-detection-and-object-classification) - * [Lane Detection](#lane-detection) - * [Traffic Light Detection](#traffic-light-detection) - * [Traffic Sign Detection](#traffic-sign-detection) - * [Prediction](#prediction) +- [First Implementation Plan](#first-implementation-plan) + - [Authors](#authors) + - [Date](#date) + - [Overview](#overview) + - [Panoptic Segmentation](#panoptic-segmentation) + - [Things and Stuff](#things-and-stuff) + - [Things](#things) + - [Stuff](#stuff) + - [Segmentation Overview](#segmentation-overview) + - [Image 
Panoptic Segmentation](#image-panoptic-segmentation) + - [LIDAR Panoptic Segmentation](#lidar-panoptic-segmentation) + - [Position Validation](#position-validation) + - [Obstacle Detection and Object Classification](#obstacle-detection-and-object-classification) + - [Lane Detection](#lane-detection) + - [Traffic Light Detection](#traffic-light-detection) + - [Traffic Sign Detection](#traffic-sign-detection) + - [Prediction](#prediction) + - [Possible Issues/Milestones](#possible-issuesmilestones) --- @@ -58,11 +59,11 @@ Stuff is the term used to define objects that don’t have proper geometry but a There are three different kinds of image segmentation: -* **Semantic Segmentation**: \ +- **Semantic Segmentation**: \ Classification of every pixel or point in an image or LIDAR map into different classes (car, person, street, ...) -* **Instance Segmentation**: \ +- **Instance Segmentation**: \ Detection of the different instances of things. -* **Panoptic Segmentation**: \ +- **Panoptic Segmentation**: \ Combination of semantic segmentation and instance segmentation. Detection of stuff plus instances of things. ![Segmentation](../../00_assets/segmentation.png) @@ -129,11 +130,11 @@ As classification net I would recommend the [net implemented by PAF21-1](https:/ Possible states are: -* green -* orange -* red -* off -* backside +- green +- orange +- red +- off +- backside --- @@ -159,11 +160,11 @@ No implementation plan yet. ## Possible Issues/Milestones -* Implement/Adapt panoptic segmentation model (EfficientPS) -* (Implement/Adapt) LIDAR panoptic segmentation model (EfficientLPS) -* Choose datasets for training -* Generate own training data for fine-tuning -* Implement classification net for traffic light/sign classification -* Find ways for lane detection -* Find solutions/implementations for the projection of LIDAR, Radar and image data -* Position validation necessary? 
+- Implement/Adapt panoptic segmentation model (EfficientPS) +- (Implement/Adapt) LIDAR panoptic segmentation model (EfficientLPS) +- Choose datasets for training +- Generate own training data for fine-tuning +- Implement classification net for traffic light/sign classification +- Find ways for lane detection +- Find solutions/implementations for the projection of LIDAR, Radar and image data +- Position validation necessary? diff --git a/doc/03_research/02_perception/Readme.md b/doc/03_research/02_perception/Readme.md index 364be7af..170fe63f 100644 --- a/doc/03_research/02_perception/Readme.md +++ b/doc/03_research/02_perception/Readme.md @@ -2,11 +2,11 @@ This folder contains all the results of research on perception: -* **PAF22** - * [Basics](./02_basics.md) - * [First implementation plan](./03_first_implementation_plan.md) -* **PAF23** - * [Pylot Perception](./04_pylot.md) - * [PAF_21_2 Perception](./05_Research_PAF21-Perception.md) - * [PAF_21_1_Perception](./06_paf_21_1_perception.md) -* [Autoware Perception](./05-autoware-perception.md) +- **PAF22** + - [Basics](./02_basics.md) + - [First implementation plan](./03_first_implementation_plan.md) +- **PAF23** + - [Pylot Perception](./04_pylot.md) + - [PAF_21_2 Perception](./05_Research_PAF21-Perception.md) + - [PAF_21_1_Perception](./06_paf_21_1_perception.md) +- [Autoware Perception](./05-autoware-perception.md) diff --git a/doc/03_research/03_planning/00_paf22/02_basics.md b/doc/03_research/03_planning/00_paf22/02_basics.md index b8b75532..16f63ee7 100644 --- a/doc/03_research/03_planning/00_paf22/02_basics.md +++ b/doc/03_research/03_planning/00_paf22/02_basics.md @@ -10,32 +10,32 @@ Simon Erlbacher, Niklas Vogel --- -* [Grundrecherche im Planing](#grundrecherche-im-planing) - * [Authors](#authors) - * [Datum](#datum) - * [PAF 2021-1](#paf-2021-1) - * [Vehicle Controller](#vehicle-controller) - * [Decision-Making-Component](#decision-making-component) - * [PAF 2021-2](#paf-2021-2) - * [PAF 2020 (1 & 
2)](#paf-2020-1--2) - * [Informationen aus alten Projekten](#informationen-aus-alten-projekten) - * [Planning Unterteilung](#planning-unterteilung) - * [Probleme](#probleme) - * [Lokalisierung](#lokalisierung) - * [Hindernisse erkennen](#hindernisse-erkennen) - * [Sicherheitseigenschaften](#sicherheitseigenschaften) - * [Decision Making (Behaviour Planner)](#decision-making-behaviour-planner) - * [Trajektorie](#trajektorie) - * [Trajektorie Tracking](#trajektorie-tracking) - * [Offene Fragen aus dem Issue](#offene-fragen-aus-dem-issue) - * [Was ist zu tun?](#was-ist-zu-tun) - * [Eingang](#eingang) - * [Ausgang](#ausgang) - * [Wie sehen die Daten vom Leaderboard für das Global Planning aus](#wie-sehen-die-daten-vom-leaderboard-für-das-global-planning-aus) - * [Daten aus dem LB und Global planning, wie kann daraus eine Trajektorie generiert werden](#daten-aus-dem-lb-und-global-planning-wie-kann-daraus-eine-trajektorie-generiert-werden) - * [Wie sieht die Grenze zwischen global und local plan aus?](#wie-sieht-die-grenze-zwischen-global-und-local-plan-aus) - * [Müssen Staus umfahren werden?](#müssen-staus-umfahren-werden) - * [Sollgeschwindigkeitsplanung](#sollgeschwindigkeitsplanung) +- [Grundrecherche im Planing](#grundrecherche-im-planing) + - [Authors](#authors) + - [Datum](#datum) + - [PAF 2021-1](#paf-2021-1) + - [Vehicle Controller](#vehicle-controller) + - [Decision-Making-Component](#decision-making-component) + - [PAF 2021-2](#paf-2021-2) + - [PAF 2020 (1 \& 2)](#paf-2020-1--2) + - [Informationen aus alten Projekten](#informationen-aus-alten-projekten) + - [Planning Unterteilung](#planning-unterteilung) + - [Probleme](#probleme) + - [Lokalisierung](#lokalisierung) + - [Hindernisse erkennen](#hindernisse-erkennen) + - [Sicherheitseigenschaften](#sicherheitseigenschaften) + - [Decision Making (Behaviour Planner)](#decision-making-behaviour-planner) + - [Trajektorie](#trajektorie) + - [Trajektorie Tracking](#trajektorie-tracking) + - [Offene Fragen aus dem 
Issue](#offene-fragen-aus-dem-issue) + - [Was ist zu tun?](#was-ist-zu-tun) + - [Eingang](#eingang) + - [Ausgang](#ausgang) + - [Wie sehen die Daten vom Leaderboard für das Global Planning aus](#wie-sehen-die-daten-vom-leaderboard-für-das-global-planning-aus) + - [Daten aus dem LB und Global planning, wie kann daraus eine Trajektorie generiert werden](#daten-aus-dem-lb-und-global-planning-wie-kann-daraus-eine-trajektorie-generiert-werden) + - [Wie sieht die Grenze zwischen global und local plan aus?](#wie-sieht-die-grenze-zwischen-global-und-local-plan-aus) + - [Müssen Staus umfahren werden?](#müssen-staus-umfahren-werden) + - [Sollgeschwindigkeitsplanung](#sollgeschwindigkeitsplanung) ## [PAF 2021-1](https://github.com/ll7/paf21-1) @@ -57,15 +57,15 @@ Die Kurvendetektion berechnet die maximale Kurvengeschwindigkeit durch Ermittlun Inputs: -* Fahrzeugposition -* Fahrzeugorientierung -* Fahrzeuggeschwindigkeit -* Fahrtrajektorie +- Fahrzeugposition +- Fahrzeugorientierung +- Fahrzeuggeschwindigkeit +- Fahrtrajektorie Outputs: -* Sollgeschwindigkeit -* Lenkwinkel +- Sollgeschwindigkeit +- Lenkwinkel ### Decision-Making-Component @@ -82,16 +82,16 @@ Finite-state machine für Manöver: Inputs: -* Geschwindigkeit -* Objekt auf Trajektorie -* Ampelsignale -* Geschwindigkeitsbegrenzung -* Geschwindigkeit und Position anderer Verkehrsteilnehmer -* Target Lane +- Geschwindigkeit +- Objekt auf Trajektorie +- Ampelsignale +- Geschwindigkeitsbegrenzung +- Geschwindigkeit und Position anderer Verkehrsteilnehmer +- Target Lane Outputs: -* "Actions" (Bremsen, Beschleunigen, Halten, Spurwechsel...) +- "Actions" (Bremsen, Beschleunigen, Halten, Spurwechsel...) Globaler Planer Überblick: ![Alt text](https://github.com/ll7/paf21-1/raw/master/imgs/Global%20Planer.png) @@ -100,21 +100,21 @@ Globaler Planer Überblick: verantwortlich für die Routenplanung und Pfadplanung für das Ego-Vehicle sowie die erkannten Verkehrsteilnehmer. 
-* global_planner
-  * Planung einer Route von einem Startpunkt zu einem oder einer Liste an Zielpunkten
-  * Commonroad Route Planner (TUM) -> Liste an Routen-Lanelets sowie eine Liste an Punkten mit Abstand etwa 10cm
-  * Anreicherung mit parallelen Spuren
-* local_planner
-  * Lokale Pfadplanung inklusive Spurwahl, Ampelmanagement und Spurwechsel
-  * erlaubte Geschwindigkeit, sowie die bevorzugte Spur basierend auf der Hinderniserkennung (obstacle planner) wird ergänzt
-  * "beste"/schnellste Möglichkeit wird errechnet und weiter an acting geschickt
-* obstacle_planner
-  * Verwaltung von dynamischen hindernissen
-  * Vorhersage von Pfaden anderer Fahrzeuge und generieren von Folgefahrzeug-Informationen
-  * Verwerfen von "irrelevanten" Fahrezeugen
-
-* Geschwindigkeitsplanung/Kontrolle wie 2021-1 + Bremswegplanung [Details](https://github.com/ll7/paf21-2/tree/main/paf_ros/paf_planning#bremsweg)
-* Map Manager für die Verwaltung aller statischen Kartendaten
+- global_planner
+  - Planung einer Route von einem Startpunkt zu einem oder einer Liste an Zielpunkten
+  - Commonroad Route Planner (TUM) -> Liste an Routen-Lanelets sowie eine Liste an Punkten mit Abstand etwa 10cm
+  - Anreicherung mit parallelen Spuren
+- local_planner
+  - Lokale Pfadplanung inklusive Spurwahl, Ampelmanagement und Spurwechsel
+  - erlaubte Geschwindigkeit sowie die bevorzugte Spur basierend auf der Hinderniserkennung (obstacle planner) wird ergänzt
+  - "beste"/schnellste Möglichkeit wird errechnet und weiter an acting geschickt
+- obstacle_planner
+  - Verwaltung von dynamischen Hindernissen
+  - Vorhersage von Pfaden anderer Fahrzeuge und Generieren von Folgefahrzeug-Informationen
+  - Verwerfen von "irrelevanten" Fahrzeugen
+
+- Geschwindigkeitsplanung/Kontrolle wie 2021-1 + Bremswegplanung [Details](https://github.com/ll7/paf21-2/tree/main/paf_ros/paf_planning#bremsweg)
+- Map Manager für die Verwaltung aller statischen Kartendaten

## PAF 2020 ([1](https://github.com/ll7/psaf1) & 
[2](https://github.com/ll7/psaf2))
@@ -129,14 +129,14 @@ Teilbaum "Intersection" als Beispiel:

"If there is a Intersection coming up the agent executes the following sequence of behaviours:

-* Approach Intersection
- * Slows down, gets into the right lane for turning and stops at line
-* Wait at Intersection
- * Waits for traffic lights or higher priority traffic
-* Enter Intersection
- * Enters the intersection and stops again, if there is higher priority oncoming traffic
-* Leave Intersection
- * Leaves the intersection in the right direction"
+- Approach Intersection
+ - Slows down, gets into the right lane for turning and stops at line
+- Wait at Intersection
+ - Waits for traffic lights or higher priority traffic
+- Enter Intersection
+ - Enters the intersection and stops again, if there is higher priority oncoming traffic
+- Leave Intersection
+ - Leaves the intersection in the right direction"

[Kompletter Entscheidungsbaum](https://github.com/ll7/psaf2/tree/main/Planning/behavior_agent)
@@ -144,8 +144,8 @@ Teilbaum "Intersection" als Beispiel:

Quellen:

-*
-*
+-
+-

![architektur gewinnterteam19](../../00_assets/gewinnerteam19-architektur.png)
@@ -163,8 +163,8 @@ Planning Übersicht

## Probleme

-* Kollision mit statischen Objekten (Gehsteig)
-* Kollision mit Fußgängern die unerwartetes Verhalten zeigen
+- Kollision mit statischen Objekten (Gehsteig)
+- Kollision mit Fußgängern, die unerwartetes Verhalten zeigen

Es wird vorgeschlagen, ein festes Notfallmanöver für das Fahrzeug zu erstellen, welches mit einer schnelleren Reaktionszeit greift, um unerwartete Kollisionen zu verhindern.
@@ -221,16 +221,16 @@ Einfache Berechnung einer Kollision

Wichtig ist die Sicherheitseigenschaft von autonomen Fahrzeugen.
Risiken können in drei Klassen unterteilt werden:

-* Kollision mit statischen Objekten
-* Kollision mit dynamischen Objekten
-* Kollision mit unerwarteten Objekten
+- Kollision mit statischen Objekten
+- Kollision mit dynamischen Objekten
+- Kollision mit unerwarteten Objekten

In dem Beispielprojekt wurde eine Bewertung der Überlappung von Trajektorien verschiedener Objekte vorgenommen.
Es wird eine mögliche Kollisionszone bestimmt. Das Fahrzeug hat hierbei drei Zonen auf seiner Trajektorie.

-* Danger Zone: Hier muss sofort gestoppt werden wenn ein Trajektorien Konflikt detektiert wird
-* Warning Zone: Hier entsprechend die Geschwindigkeit anpassen im Verhältnis zu der DTC (distance to collision)
-* Safe Zone
+- Danger Zone: Hier muss sofort gestoppt werden, wenn ein Trajektorienkonflikt detektiert wird
+- Warning Zone: Hier entsprechend die Geschwindigkeit anpassen im Verhältnis zu der DTC (distance to collision)
+- Safe Zone

Die Kollision benötigt die Position eines möglichen Kollisionsgegenstandes und seine Form. Wenn die Orientierung und die Geschwindigkeit verfügbar sind, kann eine Vorhersage zu der zukünftigen Position getroffen werden, um Konflikte zu vermeiden.
@@ -243,9 +243,9 @@ Annahme: Alle Verkehrsteilnehmer haben konstante Geschwindigkeit (sonst Berechnu

Verkehrsszenario einer Kreuzung mit verschiedenen Zonen.
-* Roter Bereich: Fahrzeug verlangsamt seine Geschwindigkeit
-* Grüner Bereich: Fahrzeug kommt zum stehen
-* Oranger Bereich (Intersection): Fahrzeug betritt diesen Bereich nur,wenn kein anderer Verkehrsteilnehmer in diesem erkannt wird
+- Roter Bereich: Fahrzeug verlangsamt seine Geschwindigkeit
+- Grüner Bereich: Fahrzeug kommt zum Stehen
+- Oranger Bereich (Intersection): Fahrzeug betritt diesen Bereich nur, wenn kein anderer Verkehrsteilnehmer in diesem erkannt wird

![statemachines](../../00_assets/statemachines.png)
Aufteilung in mehrere state machines
@@ -253,13 +253,13 @@ Eine state machine oder Aufteileung in mehrere state machines

Vorteile von mehreren state machines:

-* Geringere Berechnungszeit
-* einfacher zu erstellen und Instand zu halten
+- Geringere Berechnungszeit
+- einfacher zu erstellen und instand zu halten

Nachteile von mehreren state machines:

-* Sehr viele Regeln
-* Regeln zwischen state machines können sich wiederholen
+- Sehr viele Regeln
+- Regeln zwischen state machines können sich wiederholen

Reinforcement Learning, Rule based System, Markov Decision Process
@@ -289,8 +289,8 @@ Fehlerminimierung bei der Trajektorienberechnung

## Trajektorie Tracking

-* Stanley Controller
-* Pure Pursuit Controller
+- Stanley Controller
+- Pure Pursuit Controller

## Offene Fragen aus dem [Issue](https://github.com/ll7/paf22/issues/26)
@@ -302,45 +302,45 @@ Dabei werden andere Fahrzeuge im näheren Umfeld des eigenen Fahrzeugs auch in d

### Eingang

-* Fahrzeugposition
-* Fahrzeugorientierung
-* Fahrzeuggeschwindigkeit
-* Fahrtrajektorie bzw anzufahrende Punkte aus denen trajektorie errechnet werden kann
-* Objekte auf Trajektorie
-* Ampelsignale und Verkehrsschilder
-* Geschwindigkeitsbegrenzung
-* Geschwindigkeit und Position anderer Verkehrsteilnehmer
-* Target Lane
+- Fahrzeugposition
+- Fahrzeugorientierung
+- Fahrzeuggeschwindigkeit
+- Fahrtrajektorie bzw. anzufahrende Punkte, aus denen die Trajektorie errechnet werden
kann +- Objekte auf Trajektorie +- Ampelsignale und Verkehrsschilder +- Geschwindigkeitsbegrenzung +- Geschwindigkeit und Position anderer Verkehrsteilnehmer +- Target Lane ### Ausgang -* "Actions" - * Bremsen - * Beschleunigen - * Halten - * Spurwechsel - * ... +- "Actions" + - Bremsen + - Beschleunigen + - Halten + - Spurwechsel + - ... Oder -* Sollgeschwindigkeit -* Lenkwinkel +- Sollgeschwindigkeit +- Lenkwinkel ### Wie sehen die Daten vom Leaderboard für das Global Planning aus "For each route, agents will be initialized at a starting point and directed to drive to a destination point, provided with a description of the route through GPS style coordinates, map coordinates and route instructions." -* GPS coordinates Beispiel: - * {'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002271601998707} -* Map/World coordinates Beispiel: - * {'x': 153.7, 'y': 15.6, 'z': 0.0} -* Route Instructions: - * RoadOption.CHANGELANELEFT: Move one lane to the left. - * RoadOption.CHANGELANERIGHT: Move one lane to the right. - * RoadOption.LANEFOLLOW: Continue in the current lane. - * RoadOption.LEFT: Turn left at the intersection. - * RoadOption.RIGHT: Turn right at the intersection. - * RoadOption.STRAIGHT: Keep straight at the intersection. +- GPS coordinates Beispiel: + - {'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002271601998707} +- Map/World coordinates Beispiel: + - {'x': 153.7, 'y': 15.6, 'z': 0.0} +- Route Instructions: + - RoadOption.CHANGELANELEFT: Move one lane to the left. + - RoadOption.CHANGELANERIGHT: Move one lane to the right. + - RoadOption.LANEFOLLOW: Continue in the current lane. + - RoadOption.LEFT: Turn left at the intersection. + - RoadOption.RIGHT: Turn right at the intersection. + - RoadOption.STRAIGHT: Keep straight at the intersection. "The distance between two consecutive waypoints could be up to hundreds of meters. Do not rely on these as your principal mechanism to navigate the environment." 
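Since the leaderboard waypoints above can be hundreds of meters apart, the global planner has to densify the route itself (the 2021-2 stack produced points roughly 10 cm apart). As a minimal sketch, assuming plain linear interpolation between world-coordinate waypoints (the function name and the spacing parameter are illustrative, not code from any of the referenced projects):

```python
import math

def densify(waypoints, spacing=0.1):
    """Linearly interpolate a sparse list of (x, y) world-coordinate
    waypoints into points roughly `spacing` meters apart."""
    dense = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, int(dist / spacing))  # at least one step per segment
        for i in range(steps):
            t = i / steps
            dense.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    dense.append(waypoints[-1])  # keep the final destination point
    return dense
```

In practice the interpolation would have to follow the lane geometry of the OpenDRIVE map rather than straight lines, since the road between two distant waypoints is rarely straight.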
@@ -352,9 +352,9 @@ Des Weiteren steh als globale Map ein OpenDRIVE file als String geparsed zur Ver [Beispiel 2021-2](#paf-2021-2): -* global_planner (Planung einer Route von einem Startpunkt zu einem oder einer Liste an Zielpunkten) - * Commonroad Route Planner (TUM) -> Liste an Routen-Lanelets sowie eine Liste an Punkten mit Abstand etwa 10cm - * (Anreicherung mit parallelen Spuren) +- global_planner (Planung einer Route von einem Startpunkt zu einem oder einer Liste an Zielpunkten) + - Commonroad Route Planner (TUM) -> Liste an Routen-Lanelets sowie eine Liste an Punkten mit Abstand etwa 10cm + - (Anreicherung mit parallelen Spuren) ### Wie sieht die Grenze zwischen global und local plan aus? @@ -372,12 +372,12 @@ Route deviation — If an agent deviates more than 30 meters from the assigned r ### Sollgeschwindigkeitsplanung -* Schilder - * vor Ampeln, Schildern, Kreisverkehren, Kreuzungen verzögern und langsamer werden -* Kurvenfahrt - * siehe [maximale Kurvengeschwindigkeit](#vehicle-controller) -* Vorausfahrendes Auto - * Geschwindigkeit an dieses Anpassen oder überholen wenn möglich -* Straßenverhältnisse - * "variety of situations: including freeways, urban areas, residential districts and rural settings" - * "variety of weather conditions: including daylight scenes, sunset, rain, fog, and night, among others" +- Schilder + - vor Ampeln, Schildern, Kreisverkehren, Kreuzungen verzögern und langsamer werden +- Kurvenfahrt + - siehe [maximale Kurvengeschwindigkeit](#vehicle-controller) +- Vorausfahrendes Auto + - Geschwindigkeit an dieses Anpassen oder überholen wenn möglich +- Straßenverhältnisse + - "variety of situations: including freeways, urban areas, residential districts and rural settings" + - "variety of weather conditions: including daylight scenes, sunset, rain, fog, and night, among others" diff --git a/doc/03_research/03_planning/00_paf22/03_Implementation.md b/doc/03_research/03_planning/00_paf22/03_Implementation.md index 534f4142..adfaa6dd 
100644 --- a/doc/03_research/03_planning/00_paf22/03_Implementation.md +++ b/doc/03_research/03_planning/00_paf22/03_Implementation.md @@ -16,15 +16,18 @@ Simon Erlbacher, Niklas Vogel --- -* [Planning Implementation](#planning-implementation) - * [Authors](#authors) - * [Date](#date) - * [Overview](#overview) - * [Preplanning](#preplanning) - * [Decision Making](#decision-making) - * [Local Path Planning](#local-path-planning) - * [Next steps](#next-steps) -* [Sources](#sources) +- [Planning Implementation](#planning-implementation) + - [Authors](#authors) + - [Date](#date) + - [Overview](#overview) + - [Preplanning](#preplanning) + - [Decision Making](#decision-making) + - [Local Path Planning](#local-path-planning) + - [Velocity profile](#velocity-profile) + - [Update path](#update-path) + - [Measure distance](#measure-distance) + - [Next steps](#next-steps) + - [Sources](#sources) --- @@ -50,14 +53,14 @@ Lanelet Model Example : Input: -* Map -* Navigation Waypoints -* (Odometry data (sensoring)) -* (GNUU data (sensoring)) +- Map +- Navigation Waypoints +- (Odometry data (sensoring)) +- (GNUU data (sensoring)) Output: -* Route (Sequences of Lanelets and Points) (local path planning, decision making) +- Route (Sequences of Lanelets and Points) (local path planning, decision making) --- @@ -74,13 +77,13 @@ The system needs to make good predictions to avoid collisions. The Perception da Input: -* Lanelet data (preplanning, local path planning) -* perception data (traffic lights situation, pedestrians,...) +- Lanelet data (preplanning, local path planning) +- perception data (traffic lights situation, pedestrians,...) Output: -* updated driving status (acting, local path planning) -* Lanelet data (acting) +- updated driving status (acting, local path planning) +- Lanelet data (acting) --- @@ -96,11 +99,11 @@ This will be calculated directly after the preplanning created a trajectory. 
The Input:

-* Trajectory points (preplanning)
+- Trajectory points (preplanning)

Output:

-* Max. Velocity (Acting)
+- Max. Velocity (Acting)

### Update path
@@ -111,14 +114,14 @@ It also tells the velocity profile to update because of the new trajectory.

Input:

-* lanelet modell (preplanning)
-* update command (decision making)
-* information about blocked lanelets (decision making, perception)
+- lanelet model (preplanning)
+- update command (decision making)
+- information about blocked lanelets (decision making, perception)

Output:

-* updated trajectory (acting, decision making)
-* update command (velocity profile)
+- updated trajectory (acting, decision making)
+- update command (velocity profile)

### Measure distance
@@ -126,24 +129,24 @@ This module measures the distance to obstacles, especially cars, with the Lidar

Input:

-* Lidar Sensor data (perception, sensoring)
+- Lidar Sensor data (perception, sensoring)

Output:

-* distance value (acting)
+- distance value (acting)

---

## Next steps

-* Another Coordination with Perception to prevent overlaps with Map Manager, Map enrichment,
-* Implement Map Manager to convert data into a compatible type for route planning and to extract additional informations (Speed Limits, trafic signs, traffic lights)
-* Implement a commonroad route planner (old projects and Gitlab TUM)
-* Analyze Lanelet plan and be familiar with it (Which information can we additionally receive from the plan?)
-* Enrich Lanelet Modell/Map with additional Informations (additional/parallel Lanes, Speed Limits, trafic signs, traffic lights)
-* Choose the Decision Maker (Evaluate Markov Modell in combination with occupancy grid)
-* calculate and evaluate distances with given perceptions
-* Publish available and needed data (data available in this stage)
+- Another coordination with Perception to prevent overlaps with Map Manager and map enrichment
+- Implement Map Manager to convert data into a compatible type for route planning and to extract additional information (speed limits, traffic signs, traffic lights)
+- Implement a commonroad route planner (old projects and Gitlab TUM)
+- Analyze Lanelet plan and be familiar with it (Which information can we additionally receive from the plan?)
+- Enrich Lanelet model/map with additional information (additional/parallel lanes, speed limits, traffic signs, traffic lights)
+- Choose the Decision Maker (Evaluate Markov model in combination with occupancy grid)
+- calculate and evaluate distances with given perceptions
+- Publish available and needed data (data available in this stage)

---

diff --git a/doc/03_research/03_planning/00_paf22/04_decision_making.md b/doc/03_research/03_planning/00_paf22/04_decision_making.md
index f28a01fe..eea8b21f 100644
--- a/doc/03_research/03_planning/00_paf22/04_decision_making.md
+++ b/doc/03_research/03_planning/00_paf22/04_decision_making.md
@@ -16,30 +16,44 @@ Josef Kircher

---

-* [Decision-making module](#decision-making-module)
- * [Author](#author)
- * [Date](#date)
- * [Prerequisite](#prerequisite)
- * [Decision-making algorithms](#decision-making-algorithms)
- * [Finite State machine](#finite-state-machine)
- * [Markov Chain](#markov-chain)
- * [Decision Tree](#decision-tree)
- * [Previous approaches](#previous-approaches)
- * [PAF21-1](#paf21-1)
- * [PAF21-2](#paf21-2)
- * [PSAF1 2020](#psaf1-2020)
- * [PSAF2 2020](#psaf2-2020)
- * [Python or ROS libraries for these decision-making
algorithms](#python-or-ros-libraries-for-these-decision-making-algorithms) - * [State machines](#state-machines) - * [SMACH](#smach) - * [SMACC](#smacc) - * [Markov Chains](#markov-chains) - * [QuantEcon](#quantecon) - * [markov_decision_making](#markov_decision_making) - * [Decision trees](#decision-trees) - * [pytrees](#pytrees) - * [Conclusion](#conclusion) - * [Sources](#sources) +- [Decision-making module](#decision-making-module) + - [Author](#author) + - [Date](#date) + - [Prerequisite](#prerequisite) + - [Decision-making algorithms](#decision-making-algorithms) + - [Finite State machine](#finite-state-machine) + - [Advantages](#advantages) + - [Disadvantages](#disadvantages) + - [Markov Chain](#markov-chain) + - [Advantages](#advantages-1) + - [Disadvantages](#disadvantages-1) + - [Decision Tree](#decision-tree) + - [Advantages](#advantages-2) + - [Disadvantages](#disadvantages-2) + - [Previous approaches](#previous-approaches) + - [PAF21-1](#paf21-1) + - [State machine](#state-machine) + - [Take away](#take-away) + - [PAF21-2](#paf21-2) + - [No clear concept](#no-clear-concept) + - [Take away](#take-away-1) + - [PSAF1 2020](#psaf1-2020) + - [State machine](#state-machine-1) + - [Take away](#take-away-2) + - [PSAF2 2020](#psaf2-2020) + - [Decision tree](#decision-tree-1) + - [Take Away](#take-away-3) + - [Python or ROS libraries for these decision-making algorithms](#python-or-ros-libraries-for-these-decision-making-algorithms) + - [State machines](#state-machines) + - [SMACH](#smach) + - [SMACC](#smacc) + - [Markov Chains](#markov-chains) + - [QuantEcon](#quantecon) + - [markov\_decision\_making](#markov_decision_making) + - [Decision trees](#decision-trees) + - [pytrees](#pytrees) + - [Conclusion](#conclusion) + - [Sources](#sources) ## Decision-making algorithms @@ -54,14 +68,14 @@ Finite-state machines are of two types—deterministic finite-state machines and #### Advantages -* easy to implement -* we know most of the scenarios (finite state space) -* 
previous groups have solutions we could adapt/extend +- easy to implement +- we know most of the scenarios (finite state space) +- previous groups have solutions we could adapt/extend #### Disadvantages -* many states necessary -* even though we can try to map all possible states, there still might be some situation we could not account for +- many states necessary +- even though we can try to map all possible states, there still might be some situation we could not account for ### Markov Chain @@ -70,14 +84,14 @@ A countably infinite sequence, in which the chain moves state at discrete time s #### Advantages -* possible to build Markov Chain from State machine -* experience from previous projects -* only depends on current state ("memorylessness") +- possible to build Markov Chain from State machine +- experience from previous projects +- only depends on current state ("memorylessness") #### Disadvantages -* might be complicated to implement -* probabilities for transitions might need to be guessed, empirically estimated +- might be complicated to implement +- probabilities for transitions might need to be guessed, empirically estimated ### Decision Tree @@ -86,13 +100,13 @@ It is one way to display an algorithm that only contains conditional control sta #### Advantages -* easy implementation -* tree like structure usable in Machine Learning (Random Forest e.g.) +- easy implementation +- tree like structure usable in Machine Learning (Random Forest e.g.) 
#### Disadvantages -* multiple decision trees necessary -* prediction independent of previous state +- multiple decision trees necessary +- prediction independent of previous state ## Previous approaches @@ -100,57 +114,57 @@ It is one way to display an algorithm that only contains conditional control sta #### State machine -* 2 state machines: one for maneuvers, one for speed control -* Speed control more complex, when to brake seems like the most challenging task +- 2 state machines: one for maneuvers, one for speed control +- Speed control more complex, when to brake seems like the most challenging task #### Take away -* Some states seem to be comparable to what we are required to accomplish by the leaderboard -* Our task might be more complex, needs additional states and transitions -* I'm uncertain about an extra speed state, might be easier to handle that more locally by the local planner, maybe in combination with an observer element that keeps track of the surrounding by processing the information from `Perception` +- Some states seem to be comparable to what we are required to accomplish by the leaderboard +- Our task might be more complex, needs additional states and transitions +- I'm uncertain about an extra speed state, might be easier to handle that more locally by the local planner, maybe in combination with an observer element that keeps track of the surrounding by processing the information from `Perception` ### PAF21-2 #### No clear concept -* some sort of state machine integrated in local planner -* obstacle planner for dynamic obstacles (pedestrians, cars, bicycles) -* useful parameters which we could adapt -* path prediction for obstacles -* obstacles are only interesting if they cross the path of the ego vehicle +- some sort of state machine integrated in local planner +- obstacle planner for dynamic obstacles (pedestrians, cars, bicycles) +- useful parameters which we could adapt +- path prediction for obstacles +- obstacles are only 
interesting if they cross the path of the ego vehicle

#### Take away

-* Obstacle planner might be useful for dynamic obstacle detection if not handled elsewhere
-* path prediction might reduce the number objects tracked that we could interfere with
-* Also, if we adapt our local plan this path prediction of other vehicles might come in handy
-* On the other hand, overhead to keep track of vehicles and maybe repredict paths if some vehicles change direction
+- Obstacle planner might be useful for dynamic obstacle detection if not handled elsewhere
+- path prediction might reduce the number of objects tracked that we could interfere with
+- Also, if we adapt our local plan this path prediction of other vehicles might come in handy
+- On the other hand, overhead to keep track of vehicles and maybe repredict paths if some vehicles change direction

### PSAF1 2020

#### State machine

-* Three driving functions: Driving, stopping at traffic light, stopping at stop sign
-* First project iteration so state machine more simple
-* still covers many important scenarios
+- Three driving functions: Driving, stopping at traffic light, stopping at stop sign
+- First project iteration, so the state machine is simpler
+- still covers many important scenarios

#### Take away

-* Good starting point to have a minimal viable state machine
-* Need adaption depending on what information we are getting forwarded/process in the planning module
+- Good starting point to have a minimal viable state machine
+- Need adaptation depending on what information we are getting forwarded/processed in the planning module

### PSAF2 2020

#### Decision tree

-* This team used a decision tree to cover the major driving scenarios
-* Within the scenarios the actions are more linear
-* Reminds me of the execution of a state where driving scenarios are the states and the execution the things our local planner should do within that state
+- This team used a decision tree to cover the major driving scenarios
+- Within the
scenarios the actions are more linear +- Reminds me of the execution of a state where driving scenarios are the states and the execution the things our local planner should do within that state #### Take Away -* Even though the approach is different, the execution might be similar to the other team algorithms -* We might not be interested in a decision tree as we want to keep the option to switch to a Markov chain, which would be more overhead if we start with a decision tree +- Even though the approach is different, the execution might be similar to the other team algorithms +- We might not be interested in a decision tree as we want to keep the option to switch to a Markov chain, which would be more overhead if we start with a decision tree ## Python or ROS libraries for these decision-making algorithms @@ -158,71 +172,71 @@ It is one way to display an algorithm that only contains conditional control sta #### SMACH -* Task-level architecture for creating state machines for robot behaviour. -* Based on Python -* Fast prototyping: Quickly create state machines -* Complex state machines can easily be created -* Introspection: smach_viewer provides a visual aid to follow the state machine executing its tasks - * smach_viewer is unmaintained and does not work with noetic -* Allows nested state machines -* Values can be passed between states -* Tutorials and documentation seems to be easy to understand so creating a first state machine shouldn't be too hard -* working with several ROS topics and messages within the state machine needs to be evaluated: - * the execution of states is mostly planned to happen in the local planner so for just sending a ROS message, SMACH might be efficient +- Task-level architecture for creating state machines for robot behaviour. 
+- Based on Python
+- Fast prototyping: Quickly create state machines
+- Complex state machines can easily be created
+- Introspection: smach_viewer provides a visual aid to follow the state machine executing its tasks
+ - smach_viewer is unmaintained and does not work with noetic
+- Allows nested state machines
+- Values can be passed between states
+- Tutorials and documentation seem to be easy to understand so creating a first state machine shouldn't be too hard
+- working with several ROS topics and messages within the state machine needs to be evaluated:
+ - the execution of states is mostly planned to happen in the local planner so for just sending a ROS message, SMACH might be efficient

Not use SMACH for:

-* Unstructured tasks: SMACH is not efficient in sheduling unstructured tasks
-* Low-level systems: SMACH is not build for high efficiency, might fall short for emergency maneuvers
+- Unstructured tasks: SMACH is not efficient in scheduling unstructured tasks
+- Low-level systems: SMACH is not built for high efficiency, might fall short for emergency maneuvers

-* Simple examples run without problem
+- Simple examples run without problem

#### SMACC

-* event-driven, asynchronous, behavioral state machine library
-* real-time ROS applications
-* written in C++
-* designed to allow programmers to build robot control applications for multicomponent robots, in an intuitive and systematic manner.
-* well maintained, lots of prebuild state machines to possibly start from
+- event-driven, asynchronous, behavioral state machine library
+- real-time ROS applications
+- written in C++
+- designed to allow programmers to build robot control applications for multicomponent robots, in an intuitive and systematic manner.
+- well maintained, lots of prebuilt state machines to possibly start from

Why not use SMACC:

-* might get some time to get back into C++
-* more sophisticated library might need more time to get used to
-* awful country music in the back of tutorial videos
+- might take some time to get back into C++
+- more sophisticated library might need more time to get used to
+- awful country music in the back of tutorial videos

-* Tutorials do not run without further debugging which I didn't invest the time to do so
+- Tutorials do not run without further debugging, which I didn't invest the time to do

### Markov Chains

#### QuantEcon

-* a economics library for implementing Markov chains
-* more focussed on simulation than actually using it in an AD agent
-* maybe usable for testing and simulating a Markov chain before implementing it
+- an economics library for implementing Markov chains
+- more focussed on simulation than actually using it in an AD agent
+- maybe usable for testing and simulating a Markov chain before implementing it

#### markov_decision_making

-* ROS library for robot decision-making based on Markov Decision Problems
-* written in C++
-* callback-based action interpretation allows to use other frameworks (SMACH)
-* relatively easy to implement hierarchical MDPs
-* supports synchronous and asynchronous execution
+- ROS library for robot decision-making based on Markov Decision Problems
+- written in C++
+- callback-based action interpretation allows using other frameworks (SMACH)
+- relatively easy to implement hierarchical MDPs
+- supports synchronous and asynchronous execution

Why not use markov_decision_making:

-* not maintained
-* only works with ROS hydro
+- not maintained
+- only works with ROS hydro

### Decision trees

#### pytrees

-* easy framework for implementing behaviour trees
-* written in python
-* used by a group two years ago
-* not usable for real-time application code according to their docs
-* priority handling - higher level interrupts
are handled first +- easy framework for implementing behaviour trees +- written in python +- used by a group two years ago +- not usable for real-time application code according to their docs +- priority handling - higher level interrupts are handled first ## Conclusion diff --git a/doc/03_research/03_planning/00_paf22/05_Navigation_Data.md b/doc/03_research/03_planning/00_paf22/05_Navigation_Data.md index 2d9047ca..18611513 100644 --- a/doc/03_research/03_planning/00_paf22/05_Navigation_Data.md +++ b/doc/03_research/03_planning/00_paf22/05_Navigation_Data.md @@ -14,13 +14,13 @@ Niklas Vogel --- -* [Navigation Data Research](#navigation-data-research) - * [Author](#author) - * [Date](#date) - * [How to receive navigation data](#how-to-receive-navigation-data) - * [Structure of navigation data](#structure-of-navigation-data) - * [Visualisation of received navigation data](#visualisation-of-received-navigation-data) -* [Sources](#sources) +- [Navigation Data Research](#navigation-data-research) + - [Author](#author) + - [Date](#date) + - [How to receive navigation data](#how-to-receive-navigation-data) + - [Structure of navigation data](#structure-of-navigation-data) + - [Visualisation of received navigation data](#visualisation-of-received-navigation-data) + - [Sources](#sources) ## How to receive navigation data @@ -58,15 +58,15 @@ Therefore, the Map is published as topic ``/carla/hero/OpenDrive`` in [OpenDRIVE The route is published in the following topics: -* ``/carla/hero/global_plan`` ([carla_msgs/CarlaRoute](https://github.com/carla-simulator/ros-carla-msgs/blob/leaderboard-2.0/msg/CarlaRoute.msg)) -* ``/carla/hero/global_plan_gnss`` ([carla_msgs/CarlaGnnsRoute](https://github.com/carla-simulator/ros-carla-msgs/blob/leaderboard-2.0/msg/CarlaGnssRoute.msg)) +- ``/carla/hero/global_plan`` ([carla_msgs/CarlaRoute](https://github.com/carla-simulator/ros-carla-msgs/blob/leaderboard-2.0/msg/CarlaRoute.msg)) +- ``/carla/hero/global_plan_gnss`` 
([carla_msgs/CarlaGnnsRoute](https://github.com/carla-simulator/ros-carla-msgs/blob/leaderboard-2.0/msg/CarlaGnssRoute.msg)) ## Structure of navigation data Routes consist of tuples of a position and a high level route instruction command which should be taken at that point. Positions are either given as GPS coordinates or as world coordinates: -* GPS coordinates: +- GPS coordinates: ```yaml [({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002271601998707}, RoadOption.LEFT), @@ -75,7 +75,7 @@ Positions are either given as GPS coordinates or as world coordinates: ({'z': 0.0, 'lat': 48.99822679980298, 'lon': 8.002735250105061}, RoadOption.STRAIGHT)] ``` -* World coordinates: +- World coordinates: ```yaml [({'x': 153.7, 'y': 15.6, 'z': 0.0}, RoadOption.LEFT), @@ -84,14 +84,14 @@ Positions are either given as GPS coordinates or as world coordinates: ({'x': 180.7, 'y': 45.1, 'z': 1.2}, RoadOption.STRAIGHT)] ``` -* High-level route instruction commands (road options): +- High-level route instruction commands (road options): - * RoadOption.**CHANGELANELEFT**: Move one lane to the left. - * RoadOption.**CHANGELANERIGHT**: Move one lane to the right. - * RoadOption.**LANEFOLLOW**: Continue in the current lane. - * RoadOption.**LEFT**: Turn left at the intersection. - * RoadOption.**RIGHT**: Turn right at the intersection. - * RoadOption.**STRAIGHT**: Keep straight at the intersection. + - RoadOption.**CHANGELANELEFT**: Move one lane to the left. + - RoadOption.**CHANGELANERIGHT**: Move one lane to the right. + - RoadOption.**LANEFOLLOW**: Continue in the current lane. + - RoadOption.**LEFT**: Turn left at the intersection. + - RoadOption.**RIGHT**: Turn right at the intersection. + - RoadOption.**STRAIGHT**: Keep straight at the intersection. **Important:** Distance between route points can be up to hundreds of meters. @@ -103,7 +103,7 @@ WIP notes from team intern meeting: -* leaderboard evaluation visualisiert die route und scenarien evtl schon... 
evtl wert genauer zu betrachten +- leaderboard evaluation visualisiert die route und scenarien evtl schon... evtl wert genauer zu betrachten ### Sources diff --git a/doc/03_research/03_planning/00_paf22/06_state_machine_design.md b/doc/03_research/03_planning/00_paf22/06_state_machine_design.md index e90a785f..ad57715d 100644 --- a/doc/03_research/03_planning/00_paf22/06_state_machine_design.md +++ b/doc/03_research/03_planning/00_paf22/06_state_machine_design.md @@ -14,33 +14,32 @@ Josef Kircher --- -* [Title of wiki page](#title-of-wiki-page) - * [Author](#author) - * [Date](#date) - * [Super state machine](#super-state-machine) - * [Driving state machine](#driving-state-machine) - * [KEEP](#keep) - * [ACCEL](#accel) - * [Brake](#brake) - * [Lane change state machine](#lane-change-state-machine) - * [DECIDE_LANE_CHANGE](#decidelanechange) - * [CHANGE_LANE_LEFT](#changelaneleft) - * [CHANGE_LANE_RIGHT](#changelaneright) - * [Intersection state machine](#intersection-state-machine) - * [APPROACH_INTERSECTION](#approachintersection) - * [IN_INTERSECTION](#inintersection) - * [TURN_LEFT](#turnleft) - * [STRAIGHT](#straight) - * [TURN_RIGHT](#turnright) - * [LEAVE_INTERSECTION](#leaveintersection) - * [Stop sign/traffic light state machine](#stop-signtraffic-light-state-machine) - * [STOP_NEAR](#stopnear) - * [STOP_SLOW_DOWN](#stopslowdown) - * [STOP_WILL_STOP](#stopwillstop) - * [STOP_WAIT](#stopwait) - * [STOP_GO](#stopgo) - * [Implementation](#implementation) - * [Sources](#sources) +- [State machine design](#state-machine-design) + - [Author](#author) + - [Date](#date) + - [Super state machine](#super-state-machine) + - [Driving state machine](#driving-state-machine) + - [KEEP](#keep) + - [UPDATE\_TARGET\_SPEED](#update_target_speed) + - [Lane change state machine](#lane-change-state-machine) + - [DECIDE\_LANE\_CHANGE](#decide_lane_change) + - [CHANGE\_LANE\_LEFT](#change_lane_left) + - [CHANGE\_LANE\_RIGHT](#change_lane_right) + - [Intersection state 
machine](#intersection-state-machine) + - [APPROACH\_INTERSECTION](#approach_intersection) + - [IN\_INTERSECTION](#in_intersection) + - [TURN\_LEFT](#turn_left) + - [STRAIGHT](#straight) + - [TURN\_RIGHT](#turn_right) + - [LEAVE\_INTERSECTION](#leave_intersection) + - [Stop sign/traffic light state machine](#stop-signtraffic-light-state-machine) + - [STOP\_NEAR](#stop_near) + - [STOP\_SLOW\_DOWN](#stop_slow_down) + - [STOP\_WILL\_STOP](#stop_will_stop) + - [STOP\_WAIT](#stop_wait) + - [STOP\_GO](#stop_go) + - [Implementation](#implementation) + - [Sources](#sources) ## Super state machine @@ -51,9 +50,9 @@ The super state machine functions as a controller of the main functions of the a Those functions are -* following the road and brake in front of obstacles if needed -* drive across an intersection -* change lane +- following the road and brake in front of obstacles if needed +- drive across an intersection +- change lane ## Driving state machine @@ -61,8 +60,8 @@ Those functions are Transition: -* From `Intersection state machine` -* From `Lane change state machine` +- From `Intersection state machine` +- From `Lane change state machine` This state machine controls the speed of the ego-vehicle. It either tells the acting part of the ego vehicle to `UPDATE_TARGET_SPEED` or `KEEP` the velocity. @@ -74,7 +73,7 @@ If there is an event requiring the ego-vehicle to change the lane as mentioned i Transition: -* From `UPDATE_TARGET_SPEED` +- From `UPDATE_TARGET_SPEED` Keep the current target speed, applied most of the time. From here changes to the `UPDATE_TARGET_SPEED` state are performed, if events require a change of `target_speed`. @@ -82,7 +81,7 @@ Keep the current target speed, applied most of the time. From here changes to th Transition: -* From `KEEP` if `new target_speed` is smaller or greater than current `target_speed` or an `obstacle` or the `leading_vehicle` is in braking distance. 
+- From `KEEP` if `new target_speed` is smaller or greater than current `target_speed` or an `obstacle` or the `leading_vehicle` is in braking distance. Set a new target speed and change back to `KEEP` state afterwards. @@ -92,26 +91,26 @@ Set a new target speed and change back to `KEEP` state afterwards. Transition: -* From `driving state machine` by `lane_change_requested` +- From `driving state machine` by `lane_change_requested` This state machine completes the change of a lane. This is triggered from the super state machine and can have multiple triggers. Those include: -* Join highway -* Leave highway -* RoadOption: - * CHANGELANELEFT - * CHANGELANERIGHT - * KEEPLANE -* avoid obstacle(doors, static objects) -* give way to emergency vehicle -* overtake slow moving vehicle -* leave a parking bay +- Join highway +- Leave highway +- RoadOption: + - CHANGELANELEFT + - CHANGELANERIGHT + - KEEPLANE +- avoid obstacle(doors, static objects) +- give way to emergency vehicle +- overtake slow moving vehicle +- leave a parking bay ### DECIDE_LANE_CHANGE Transition: -* From super state machine by above triggers +- From super state machine by above triggers From the super state machine the transition to change the lane is given by one of the above triggers. This state decides to which lane should be changed dependent on the trigger. It takes into account if there are lanes to the left and/or right and if the lane change is requested by a roadOption command. @@ -120,7 +119,7 @@ It takes into account if there are lanes to the left and/or right and if the lan Transition: -* From `DECIDE_LANE_CHANGE` by `RoadOption.CHANGELANELEFT` or `obstacle_in_lane` or `leader_vehicle_speed < LEADERTHRESHOLD` +- From `DECIDE_LANE_CHANGE` by `RoadOption.CHANGELANELEFT` or `obstacle_in_lane` or `leader_vehicle_speed < LEADERTHRESHOLD` This state performs a lane change to the lane on the left. 
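The lane-change triggers described in this hunk can be sketched as a small decision function. Note this is only an illustration of the documented transitions: `RoadOption` values mirror the leaderboard commands, while `LEADER_THRESHOLD` and the function name are made-up placeholders, not part of the existing codebase.

```python
from enum import Enum, auto

class RoadOption(Enum):
    CHANGELANELEFT = auto()
    CHANGELANERIGHT = auto()
    KEEPLANE = auto()

LEADER_THRESHOLD = 5.0  # m/s, illustrative value for a "slow leading vehicle"

def decide_lane_change(road_option, obstacle_in_lane, leader_speed,
                       emergency_vehicle_in_front):
    """Return 'LEFT', 'RIGHT' or None, mirroring the DECIDE_LANE_CHANGE rules."""
    # CHANGE_LANE_RIGHT: RoadOption.CHANGELANERIGHT or emergency vehicle in front
    if emergency_vehicle_in_front or road_option is RoadOption.CHANGELANERIGHT:
        return "RIGHT"
    # CHANGE_LANE_LEFT: RoadOption.CHANGELANELEFT, obstacle in lane,
    # or leading vehicle below the speed threshold
    if (road_option is RoadOption.CHANGELANELEFT or obstacle_in_lane
            or leader_speed < LEADER_THRESHOLD):
        return "LEFT"
    return None  # no lane change requested
```
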
@@ -134,8 +133,8 @@ If an obstacle or a slow leading vehicle are the reasons for the lane change, to Transition: -* From `DECIDE_LANE_CHANGE` by `RoadOption.CHANGELANERIGHT` or `emergency_vehicle_in_front` -* From `CHANGE_LANE_LEFT` by `passing_obstacle` or `slow_leading_vehicle` +- From `DECIDE_LANE_CHANGE` by `RoadOption.CHANGELANERIGHT` or `emergency_vehicle_in_front` +- From `CHANGE_LANE_LEFT` by `passing_obstacle` or `slow_leading_vehicle` For changing to the right lane it is assumed, that the traffic in this lane flows in the driving direction of the ego vehicle. @@ -147,7 +146,7 @@ The lane change should be performed if the lane is free and there are no fast mo Transition: -* From `driving state machine` by `intersection_detected` +- From `driving state machine` by `intersection_detected` This state machine handles the passing of an intersection. @@ -163,8 +162,8 @@ If there are is a traffic light or a stop sign at the intersection change to the Transition: -* From `STOP_SIGN/TRAFFIC SM` by `clearing the traffic light, stop sign` -* From `APPROACH_INTERSECTION` by `detecting an unsignalized and cleared intersection` +- From `STOP_SIGN/TRAFFIC SM` by `clearing the traffic light, stop sign` +- From `APPROACH_INTERSECTION` by `detecting an unsignalized and cleared intersection` After the approach of the intersection and clear a possible traffic light/stop sign, the ego vehicle enters the intersection. @@ -174,7 +173,7 @@ From there the RoadOption decides in which direction the ego vehicle should turn Transition: -* From `IN_INTERSECTION` by `RoadOption.LEFT` +- From `IN_INTERSECTION` by `RoadOption.LEFT` Check for pedestrians on the driving path. If the path is clear of pedestrians, make sure there will be no crashes during the turning process with oncoming traffic. @@ -182,7 +181,7 @@ Check for pedestrians on the driving path. 
If the path is clear of pedestrians, Transition: -* From `IN_INTERSECTION` by `RoadOption.STRAIGHT` +- From `IN_INTERSECTION` by `RoadOption.STRAIGHT` Check if there is a vehicle running a red light in the intersection. Pass the intersection. @@ -190,7 +189,7 @@ Check if there is a vehicle running a red light in the intersection. Pass the in Transition: -* From `IN_INTERSECTION` by `RoadOption.RIGHT` +- From `IN_INTERSECTION` by `RoadOption.RIGHT` Check for pedestrians on the driving path. If the path is clear of pedestrians, make sure there will be no crashes during the turning process with crossing traffic. @@ -198,7 +197,7 @@ Check for pedestrians on the driving path. If the path is clear of pedestrians, Transition: -* From `TURN_RIGHT`, `STRAIGHT` or `TURN_LEFT` by passing a distance from the intersection. +- From `TURN_RIGHT`, `STRAIGHT` or `TURN_LEFT` by passing a distance from the intersection. ## Stop sign/traffic light state machine @@ -206,7 +205,7 @@ Transition: Transition: -* From `APPROACH_INTERSECTION` by `stop_sign_detected or traffic_light_detected` +- From `APPROACH_INTERSECTION` by `stop_sign_detected or traffic_light_detected` This state machine handles the handling of stop signs and traffic lights. @@ -218,7 +217,7 @@ If the traffic light/stop sign is near, reduce speed. Avoid crashes with slowly Transitions: -* From `STOP_NEAR` if `distance greater braking distance`. +- From `STOP_NEAR` if `distance greater braking distance`. Slow down near the traffic light to be able to react to quick changes. @@ -226,10 +225,10 @@ Slow down near the traffic light to be able to react to quick changes. Transition: -* From `STOP_NEAR` if `distance < braking distance` while sensing a traffic_light that is `red` or `yellow` or a `stop sign` -* From `STOP_SLOW_DOWN` if `distance < braking distance` -* From `STOP_GO` if the traffic light changes from `green` to `yellow` or `red` and the ego vehicle can stop in front of the stop sign/traffic light. 
-* From `STOP_WAIT` if the there is a predominant stop sign and the ego vehicle didn't reach the stop line. +- From `STOP_NEAR` if `distance < braking distance` while sensing a traffic_light that is `red` or `yellow` or a `stop sign` +- From `STOP_SLOW_DOWN` if `distance < braking distance` +- From `STOP_GO` if the traffic light changes from `green` to `yellow` or `red` and the ego vehicle can stop in front of the stop sign/traffic light. +- From `STOP_WAIT` if there is a predominant stop sign and the ego vehicle didn't reach the stop line. Stop in front of the traffic light or the stop sign. @@ -237,7 +236,7 @@ Stop in front of the traffic light or the stop sign. Transition: -* From `STOP_WILL_STOP` by either vehicle has stopped or distance to stop line is less than 2 meters +- From `STOP_WILL_STOP` by either vehicle has stopped or distance to stop line is less than 2 meters The vehicle has stopped and waits eiter until leading vehicle continues to drive or traffic rules permit to continue driving. @@ -245,9 +244,9 @@ The vehicle has stopped and waits eiter until leading vehicle continues to drive Transition: -* From `STOP_NEAR` if traffic light is `green` or `off` -* From `STOP_SLOW_DOWN` if traffic light is `green` or `off` -* FROM `STOP_WAIT` if traffic light is `green` or `off` +- From `STOP_NEAR` if traffic light is `green` or `off` +- From `STOP_SLOW_DOWN` if traffic light is `green` or `off` +- From `STOP_WAIT` if traffic light is `green` or `off` Ego vehicle starts to accelerate to clear the traffic sign/traffic light or continues to drive if the traffic light is green or deactivated. 
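The stop sign/traffic light transitions documented in this file can be sketched as a pure transition function. This is a simplified illustration only: string-valued states, a string-valued light signal, and the function name are assumptions, not the project's implementation.

```python
def next_stop_state(state, light, distance, braking_distance):
    """Illustrative transition table for the stop sign/traffic light states."""
    # STOP_GO is reachable from STOP_NEAR, STOP_SLOW_DOWN and STOP_WAIT
    # whenever the traffic light is green or deactivated.
    if light in ("green", "off") and state in ("STOP_NEAR", "STOP_SLOW_DOWN",
                                               "STOP_WAIT"):
        return "STOP_GO"
    if state == "STOP_NEAR":
        # Outside braking distance: slow down; inside: commit to stopping.
        return "STOP_SLOW_DOWN" if distance > braking_distance else "STOP_WILL_STOP"
    if state == "STOP_SLOW_DOWN" and distance < braking_distance:
        return "STOP_WILL_STOP"
    if state == "STOP_WILL_STOP" and distance < 2.0:
        # Vehicle has (almost) reached the stop line: wait.
        return "STOP_WAIT"
    return state  # no transition triggered
```
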
diff --git a/doc/03_research/03_planning/00_paf22/07_OpenDrive.md b/doc/03_research/03_planning/00_paf22/07_OpenDrive.md index 7c5c46fc..e25b8b60 100644 --- a/doc/03_research/03_planning/00_paf22/07_OpenDrive.md +++ b/doc/03_research/03_planning/00_paf22/07_OpenDrive.md @@ -15,21 +15,21 @@ Simon Erlbacher --- -* [OpenDrive Format](#opendrive-format) - * [Authors](#authors) - * [Date](#date) - * [General](#general) - * [Different Projects](#different-projects) - * [PSAF1](#psaf1) - * [PSAF2](#psaf2) - * [paf21-2](#paf21-2) - * [paf21-1](#paf21-1) - * [Result](#result) - * [More information about OpenDrive](#more-information-about-opendrive) - * [Start of the implementation](#start-of-the-implementation) - * [Implementation details](#implementation-details) - * [Follow-up Issues](#follow-up-issues) - * [Sources](#sources) +- [OpenDrive Format](#opendrive-format) + - [Authors](#authors) + - [Date](#date) + - [General](#general) + - [Different Projects](#different-projects) + - [PSAF1](#psaf1) + - [PSAF2](#psaf2) + - [paf21-2](#paf21-2) + - [paf21-1](#paf21-1) + - [Result](#result) + - [More information about OpenDrive](#more-information-about-opendrive) + - [Start of the implementation](#start-of-the-implementation) + - [Implementation details](#implementation-details) + - [Follow-up Issues](#follow-up-issues) + - [Sources](#sources) ## General @@ -45,33 +45,33 @@ It is examined how the OpenDrive file is converted and read in other groups and ### PSAF1 -* Subscribed the OpenDrive information from the Carla Simulator -* Used the Commonroad Route Planner from TUM (in the project they used the now deprecated verison) -* This Route Planner converts the xdor file from the CarlaWorldInfo message automatically -* As a result they used a Lanelet model, which they enriched with additional information about +- Subscribed the OpenDrive information from the Carla Simulator +- Used the Commonroad Route Planner from TUM (in the project they used the now deprecated verison) +- This 
Route Planner converts the xodr file from the CarlaWorldInfo message automatically +- As a result they used a Lanelet model, which they enriched with additional information about traffic lights and traffic signs -* This additional information comes from the Carla Simulator API +- This additional information comes from the Carla Simulator API Result: We can't use this information from [psaf1]("https://github.com/ll7/psaf1/tree/master/psaf_ros/psaf_global_planner") , because it is not allowed to use privileged information from the Carla Simulator ### PSAF2 -* Same approach as described in PSAF1 above -* Same problem in [psaf2](https://github.com/ll7/psaf2/tree/main/Planning/global_planner) with this approach as +- Same approach as described in PSAF1 above +- Same problem in [psaf2](https://github.com/ll7/psaf2/tree/main/Planning/global_planner) with this approach as mentioned in PSAF1 ### paf21-2 -* Same approach as described in PSAF1 above -* Same problem in [paf21-2](https://github.com/ll7/paf21-2#global-planner) with this approach as mentioned in PSAF1 +- Same approach as described in PSAF1 above +- Same problem in [paf21-2](https://github.com/ll7/paf21-2#global-planner) with this approach as mentioned in PSAF1 ### paf21-1 -* Worked directly with the OpenDrive format -* There is a lot of information available -* They extracted some information from the xdor file to plan their trajectory -* They don't recommend to use this approach, because a lot of "black magic" is happening in their code +- Worked directly with the OpenDrive format +- There is a lot of information available +- They extracted some information from the xodr file to plan their trajectory +- They don't recommend using this approach, because a lot of "black magic" is happening in their code Result: The only possible way to get all the road information without using the Carla Simulator API @@ -83,17 +83,17 @@ during the planning process. 
It would be better to convert and analyse the xdor ## More information about OpenDrive -* We can read the xdor file with the [ElementTree XML API](https://docs.python.org/3/library/xml.etree.elementtree.html) -* We can refactor the scripts from paf21-1 but as they described, it is a lot of code and hard to get a good +- We can read the xdor file with the [ElementTree XML API](https://docs.python.org/3/library/xml.etree.elementtree.html) +- We can refactor the scripts from paf21-1 but as they described, it is a lot of code and hard to get a good overview about it -* Also we have a different scenario, because we do not need to read the whole xdor file in the beginning. We need +- Also we have a different scenario, because we do not need to read the whole xdor file in the beginning. We need to search for the relevant area -* The OpenDrive format contains a lot of information to extract - * Every road section has a unique id - * Road has a predecessor and a successor with its specific type (road, junction,...) - * Information about signals and their position - * Information about the reference lines (line which seperates lanes) and their layout (linear, arc, cubic curves) - * Information about the maximum speed +- The OpenDrive format contains a lot of information to extract + - Every road section has a unique id + - Road has a predecessor and a successor with its specific type (road, junction,...) 
+ - Information about signals and their position + - Information about the reference lines (line which seperates lanes) and their layout (linear, arc, cubic curves) + - Information about the maximum speed ![OpenDrive stop sign](../../00_assets/Stop_sign_OpenDrive.png) Impression of the format @@ -108,22 +108,22 @@ After that, we can add some more information about the signals to our trajectory structure of the xodr files from the Simulator: -* header -* road (attributes: junction id (-1 if no junction), length, road id, Road name) - * lanes - * link (predecessor and successor with id) - * signals - * type (contains max speed) - * planView (contains information about the geometry and the line type (= reference line)) -* controller (information about the controlled signals) -* junction (crossing lane sections) +- header +- road (attributes: junction id (-1 if no junction), length, road id, Road name) + - lanes + - link (predecessor and successor with id) + - signals + - type (contains max speed) + - planView (contains information about the geometry and the line type (= reference line)) +- controller (information about the controlled signals) +- junction (crossing lane sections) link: -* every road has a successor and a predecessor road (sometimes only one of them) -* the road can have the type "road" or "junction" -* we can access the relevant sections with an id value -* Example: +- every road has a successor and a predecessor road (sometimes only one of them) +- the road can have the type "road" or "junction" +- we can access the relevant sections with an id value +- Example: @@ -132,10 +132,10 @@ link: planView: -* x and y world coordinates (startposition of the reference line) -* hdg value for the orientation -* length value for the length of this road section (reference line) -* reference line type: line, curvature (more possible in Asam OpenDrive) +- x and y world coordinates (startposition of the reference line) +- hdg value for the orientation +- length value 
for the length of this road section (reference line) +- reference line type: line, curvature (more possible in Asam OpenDrive) @@ -151,18 +151,18 @@ planView: lane: -* a lane is part of a road -* road can consists of different lanes -* the lane next to the reference line has the value 1 -* the lanes next to that lane have increasing numbers -* lanes on the left and on the right side of the reference line have different signs +- a lane is part of a road +- a road can consist of different lanes +- the lane next to the reference line has the value 1 +- the lanes next to that lane have increasing numbers +- lanes on the left and on the right side of the reference line have different signs junction: -* a road section with crossing lanes -* a junction has one id -* every segment in the junction connects different lanes -* every connection has its own id +- a road section with crossing lanes +- a junction has one id +- every segment in the junction connects different lanes +- every connection has its own id @@ -176,16 +176,16 @@ junction: Relevant coordinate system: -* inertial coordinate system - * x -> right (roll) - * y -> up (pitch) - * z -> coming out of the drawig plane (yaw) +- inertial coordinate system + - x -> right (roll) + - y -> up (pitch) + - z -> coming out of the drawing plane (yaw) Driving direction: -* calculate on which road the agent drives -* that has an impact on the way we have to calculate the end points -* A road is decribed through the reference line. Every road segment has a +- calculate on which road the agent drives +- that has an impact on the way we have to calculate the end points +- A road is described through the reference line. Every road segment has a starting point and a length value. The distance to the following road segment. The calculation of the trajectory uses the startpoint of the next road segment to navigate along the street. 
If the agent drives on the other side of the street, @@ -198,25 +198,25 @@ the start points of the reference line There are two methods to calculate the trajectory. The first method is only needed once at the beginning, when the ego-vehicle stays at its start position. -* First we need to find the current road, where the agent is located -* Take all road start points and calculate the nearest startpoint to the vehicle position -* Calculate Endpoint for each connecting road and check if the vehicle lays in the interval -> road id - * use the predecessor and the successor points to get the correct road - * also check if the predecessor or successor is a junction. If do not have a command from the leaderboard we pass +- First we need to find the current road, where the agent is located +- Take all road start points and calculate the nearest startpoint to the vehicle position +- Calculate Endpoint for each connecting road and check if the vehicle lays in the interval -> road id + - use the predecessor and the successor points to get the correct road + - also check if the predecessor or successor is a junction. If do not have a command from the leaderboard we pass the junction straight. 
For this scenario we first have to filter the correct road id out ouf the junction to get the start and endpoint - * check if the ego vehicle lays in the interval -> if yes change the road id (else we chose the correct one) -* Check the driving direction (following road id) - * calculate the distances from one predecessor point and one successor point to the target point - * the road with the smaller distance is the next following road -* Interpolate the current road from start to end (arc and line) - * check the point ordering -> possible that we have to reverse them - * at the beginning we can be located in the middle of a street - * we need to delete the points from the interpolation laying before our ego vehicle position -* Weakness - * The Calculation of the driving direction is based on the distance to the target location - * If the course of the road is difficult, this approach could fail - * As you can see in the top right corner of the picture. the distance from the lower blue line + - check if the ego vehicle lays in the interval -> if yes change the road id (else we chose the correct one) +- Check the driving direction (following road id) + - calculate the distances from one predecessor point and one successor point to the target point + - the road with the smaller distance is the next following road +- Interpolate the current road from start to end (arc and line) + - check the point ordering -> possible that we have to reverse them + - at the beginning we can be located in the middle of a street + - we need to delete the points from the interpolation laying before our ego vehicle position +- Weakness + - The Calculation of the driving direction is based on the distance to the target location + - If the course of the road is difficult, this approach could fail + - As you can see in the top right corner of the picture. the distance from the lower blue line is shorter to the target than the upper blue line. 
The method would choose the lower line because of the smaller distance @@ -226,45 +226,45 @@ Road Concepts Further Calculation of the trajectory -* after each interpolation we calculate the midpoint of a lane. Otherwise we would drive on +- after each interpolation we calculate the midpoint of a lane. Otherwise we would drive on the reference line. That is why we have to filter the width information for our lanes. - * there can be more than one driving lane on one side of the reference line - * filter all width values and decide on which side of the reference line the vehicle drives - * after this we have the information which of the two perpendicular vectors we need to compute + - there can be more than one driving lane on one side of the reference line + - filter all width values and decide on which side of the reference line the vehicle drives + - after this we have the information which of the two perpendicular vectors we need to compute the points on the correct side of the reference line - * we always choose the biggest width value, to take the rightmost lane + - we always choose the biggest width value, to take the rightmost lane ![lane_midpoint](../../00_assets/lane_midpoint.png) Scenario and concept to compute the midpoint of a lane -* the second method takes the target position and the next command from the leaderboard -* we always calculate the follow road based on the distance to the target and then +- the second method takes the target position and the next command from the leaderboard +- we always calculate the follow road based on the distance to the target and then interpolate the current road - * here we can also change this approach if there is the same weakness as mentioned before - * we can calculate the next road based on the distance to the last trajectory point -* we have to keep in mind the same aspects as in the starting case -* after each interpolation of a road we check the distance from the new trajectory points to + - here we can also 
change this approach if there is the same weakness as mentioned before + - we can calculate the next road based on the distance to the last trajectory point +- we have to keep in mind the same aspects as in the starting case +- after each interpolation of a road we check the distance from the new trajectory points to the target position - * if the distance is smaller than a set threshold, we reached the target - * in this case we may need to calculate this last road again because based on the command + - if the distance is smaller than a set threshold, we reached the target + - in this case we may need to calculate this last road again because based on the command from the leaderboard we have to turn to the left side or the rigth side. We need to change the lane before we reach the startpoint of a junction - * we calculate the next road to take, based on the heading value of the endpoint of this + - we calculate the next road to take, based on the heading value of the endpoint of this following road. We compare this value to the yaw value from the leaderboard. The heading value with the smallest distance indicates the correct following road id. - * when we know the end point of the following road, we can recompute the last trajectory point + - when we know the end point of the following road, we can recompute the last trajectory point with all possible width values for this road. calculate the distance to the following endpoint and chose the width value with the smallest distance. 
- * Now we can interpolate our last road with the new width value (if the width value was updated) - * Also we can smooth our first trajectory points with smaller width values, to change the lane smooth + - Now we can interpolate our last road with the new width value (if the width value was updated) + - Also we can smooth our first trajectory points with smaller width values, to change the lane smooth For the next target point and command we need to call this method again (not the starting method) and calculate the trajectory. Weakness -* Offset for restricted areas is not yet calculated (see the picture above) -* no max speed value for junctions -> default value -* Check where the target points are located. In the middle of a junction or before? +- Offset for restricted areas is not yet calculated (see the picture above) +- no max speed value for junctions -> default value +- Check where the target points are located. In the middle of a junction or before? At the moment we assume they are before a junction. In the following test scenario we added a manual start point on road 8. 
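Reading the road attributes, link elements and planView geometry described in this document with the [ElementTree XML API](https://docs.python.org/3/library/xml.etree.elementtree.html) could look like the sketch below. The embedded OpenDRIVE fragment and all its values are made up for illustration; only the element and attribute names follow the structure described above.

```python
import xml.etree.ElementTree as ET

# Minimal made-up OpenDRIVE fragment following the structure described above.
XODR = """<OpenDRIVE>
  <road name="Road 8" id="8" length="120.5" junction="-1">
    <link>
      <predecessor elementType="road" elementId="7"/>
      <successor elementType="junction" elementId="20"/>
    </link>
    <planView>
      <geometry s="0.0" x="153.7" y="15.6" hdg="1.57" length="120.5">
        <line/>
      </geometry>
    </planView>
  </road>
</OpenDRIVE>"""

def road_index(xodr_string):
    """Map road id -> (length, successor element type) from an OpenDRIVE string."""
    root = ET.fromstring(xodr_string)
    index = {}
    for road in root.iter("road"):
        succ = road.find("link/successor")
        index[road.get("id")] = (
            float(road.get("length")),
            succ.get("elementType") if succ is not None else None,
        )
    return index
```

The same pattern (`road.find("planView")`, iterating its `geometry` children) gives the start point, heading and length of each reference-line segment needed for interpolation.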
@@ -288,15 +288,15 @@ One cutout of the trajectory ## Follow-up Issues -* Check out positioning - * Compare positioning of signs in Carla and in the OpenDrive Map - * Compare positioning of traffic lights in Carla and in the OpenDrive Map -* Visualize Trajectory in Carla -* Implement velocity profile -* Check if waypoints fit with Simulator -* Keep the lane limitation -> testing -* Extract signals information for the state machine -* Implement local path planner for alternative routes and collision prediction +- Check out positioning + - Compare positioning of signs in Carla and in the OpenDrive Map + - Compare positioning of traffic lights in Carla and in the OpenDrive Map +- Visualize Trajectory in Carla +- Implement velocity profile +- Check if waypoints fit with Simulator +- Keep the lane limitation -> testing +- Extract signals information for the state machine +- Implement local path planner for alternative routes and collision prediction ## Sources diff --git a/doc/03_research/03_planning/00_paf22/07_reevaluation_desicion_making.md b/doc/03_research/03_planning/00_paf22/07_reevaluation_desicion_making.md index a3fe7fb5..f6492d3c 100644 --- a/doc/03_research/03_planning/00_paf22/07_reevaluation_desicion_making.md +++ b/doc/03_research/03_planning/00_paf22/07_reevaluation_desicion_making.md @@ -16,28 +16,27 @@ Josef Kircher --- -* [Re-evaluation of decision making component](#re-evaluation-of-decision-making-component) - * [**Summary:** This page gives a foundation for the re-evaluation of the decision-making](#summary-this-page-gives-a-foundation-for-the-re-evaluation-of-the-decision-making) - * [Author](#author) - * [Date](#date) - * [Prerequisite](#prerequisite) - * [Reasons for re-evaluation](#reasons-for-re-evaluation) - * [Options](#options) - * [Pylot](#pylot) - * [Pytrees](#pytrees) - * [Pros](#pros) - * [Cons](#cons) - * [Conclusion](#conclusion) - * [Sources](#sources) +- [Re-evaluation of decision making 
component](#re-evaluation-of-decision-making-component) + - [Author](#author) + - [Date](#date) + - [Prerequisite](#prerequisite) + - [Reasons for re-evaluation](#reasons-for-re-evaluation) + - [Options](#options) + - [Pylot](#pylot) + - [Pytrees](#pytrees) + - [Pros](#pros) + - [Cons](#cons) + - [Conclusion](#conclusion) + - [Sources](#sources) ## Reasons for re-evaluation In the last sprint, I tried to get a graphic tool to work with the docker container withing the project. That failed, but I still think, that a graphical representation would be helpful. Other reasons are: -* not much time has been allocated for the state machine so far -* using SMACH would result in a mostly from scratch implementation -* harder to debug due to the lack of a graphic representation +- not much time has been allocated for the state machine so far +- using SMACH would result in a mostly from scratch implementation +- harder to debug due to the lack of a graphic representation ## Options @@ -56,18 +55,18 @@ As it is looking very promising, I list here a few arguments to help support my #### Pros -* support a graphical representation at runtime with rqt -* a lot of similar driving scenarios as the old team -* so a lot of code can be recycled -* quite intuitive and easy to understand -* only a limited amount of commands (easy to learn) -* well documented -* maintained +- support a graphical representation at runtime with rqt +- a lot of similar driving scenarios as the old team +- so a lot of code can be recycled +- quite intuitive and easy to understand +- only a limited amount of commands (easy to learn) +- well documented +- maintained #### Cons -* only a couple of decision can be made inside the tree, so it might be more complicated to depict the complex behaviour of the ego vehicle -* A lot of time was invested in the design of the original state machine, might be needed to be adapted +- only a couple of decision can be made inside the tree, so it might be more complicated to 
depict the complex behaviour of the ego vehicle +- A lot of time was invested in the design of the original state machine, might be needed to be adapted ## Conclusion diff --git a/doc/03_research/03_planning/Readme.md b/doc/03_research/03_planning/Readme.md index e48c2530..67c5f196 100644 --- a/doc/03_research/03_planning/Readme.md +++ b/doc/03_research/03_planning/Readme.md @@ -3,5 +3,5 @@ This folder contains all the results of research on planning from PAF 23 and 22. The research documents from the previous project were kept as they contain helpful information. The documents are separated in different folders: -* **[PAF22](./00_paf22/)** -* **[PAF23](./00_paf23/)** +- **[PAF22](./00_paf22/)** +- **[PAF23](./00_paf23/)** diff --git a/doc/03_research/04_requirements/02_informations_from_leaderboard.md b/doc/03_research/04_requirements/02_informations_from_leaderboard.md index 137d3566..25ce6b78 100644 --- a/doc/03_research/04_requirements/02_informations_from_leaderboard.md +++ b/doc/03_research/04_requirements/02_informations_from_leaderboard.md @@ -19,52 +19,52 @@ none --- -* [Requirements of Carla Leaderboard](#requirements-of-carla-leaderboard) - * [Author](#author) - * [Date](#date) - * [Prerequisite](#prerequisite) - * [Task](#task) - * [Participation modalities](#participation-modalities) - * [Route format](#route-format) - * [Sensors](#sensors) - * [Evaluation](#evaluation) - * [Main score](#main-score) - * [Driving Score for route i](#driving-score-for-route-i) - * [Infraction penalty](#infraction-penalty) - * [Shutdown criteria](#shutdown-criteria) - * [Submission](#submission) - * [Sources](#sources) +- [Requirements of Carla Leaderboard](#requirements-of-carla-leaderboard) + - [Author](#author) + - [Date](#date) + - [Prerequisite](#prerequisite) + - [Task](#task) + - [Participation modalities](#participation-modalities) + - [Route format](#route-format) + - [Sensors](#sensors) + - [Evaluation](#evaluation) + - [Main score](#main-score) + - [Driving 
score for single route](#driving-score-for-single-route) + - [Infraction penalty](#infraction-penalty) + - [Shutdown criteria](#shutdown-criteria) + - [Submission](#submission) + - [Sources](#sources) --- ## Task -* an autonomous agent should drive through a set of predefined routes -* for each route: - * initialization at a starting point - * directed to drive to a destination point - * route described by GPS coordinates **or** map coordinates **or** route instructions -* route situations: - * freeways - * urban areas - * residential districts - * rural settings -* weather conditions: - * daylight - * sunset - * rain - * fog - * night - * more ... +- an autonomous agent should drive through a set of predefined routes +- for each route: + - initialization at a starting point + - directed to drive to a destination point + - route described by GPS coordinates **or** map coordinates **or** route instructions +- route situations: + - freeways + - urban areas + - residential districts + - rural settings +- weather conditions: + - daylight + - sunset + - rain + - fog + - night + - more ... Possible traffic signs (not complete): -* Stop sign -* Speed limitation -* Traffic lights -* Arrows on street -* Stop sign on street +- Stop sign +- Speed limitation +- Traffic lights +- Arrows on street +- Stop sign on street ## Participation modalities @@ -100,12 +100,12 @@ Second, world coordinates and a route option High-level commands (road options) are: -* RoadOption.**CHANGELANELEFT**: Move one lane to the left. -* RoadOption.**CHANGELANERIGHT**: Move one lane to the right. -* RoadOption.**LANEFOLLOW**: Continue in the current lane. -* RoadOption.**LEFT**: Turn left at the intersection. -* RoadOption.**RIGHT**: Turn right at the intersection. -* RoadOption.**STRAIGHT**: Keep straight at the intersection. +- RoadOption.**CHANGELANELEFT**: Move one lane to the left. +- RoadOption.**CHANGELANERIGHT**: Move one lane to the right. 
+- RoadOption.**LANEFOLLOW**: Continue in the current lane. +- RoadOption.**LEFT**: Turn left at the intersection. +- RoadOption.**RIGHT**: Turn right at the intersection. +- RoadOption.**STRAIGHT**: Keep straight at the intersection. **Important:** If the semantics of left and right are ambiguous, the next position should be used to clarify the path. @@ -131,9 +131,9 @@ Determination how "good" the agent performs on the Leaderboard. The driving proficiency of an agent can be characterized by multiple metrics. -* `Driving score:` Product between route completion and infractions penalty -* `Route completion:` Percentage of the route distance completed by an agent -* `Infraction penalty:` The leaderboard tracks several types of infractions which reduce the score +- `Driving score:` Product between route completion and infractions penalty +- `Route completion:` Percentage of the route distance completed by an agent +- `Infraction penalty:` The leaderboard tracks several types of infractions which reduce the score Every agent starts with a base infraction score of 1.0 at the beginning. @@ -147,36 +147,36 @@ Product of route completion and infraction penalty of this route Not complying with traffic rules will result in a penalty. Multiple penalties can be applied per route. Infractions ordered by severity are: -* collisions with pedestrians: 0.50 -* collisions with other vehicles: 0.60 -* collisions with static elements: 0.65 -* running a red light: 0.70 -* running a stop sign: 0.80 +- collisions with pedestrians: 0.50 +- collisions with other vehicles: 0.60 +- collisions with static elements: 0.65 +- running a red light: 0.70 +- running a stop sign: 0.80 It is possible that the vehicle is stuck in some scenario. 
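The scoring arithmetic described above — the driving score as the product of route completion and an infraction penalty that starts at 1.0 and shrinks with every infraction — can be sketched in a few lines of Python. The function and coefficient names below are illustrative, not taken from the leaderboard code:

```python
# Sketch of the leaderboard scoring described above (illustrative names).
# The penalty starts at 1.0 and is multiplied by one coefficient per infraction.
INFRACTION_COEFFICIENTS = {
    "collision_pedestrian": 0.50,
    "collision_vehicle": 0.60,
    "collision_static": 0.65,
    "red_light": 0.70,
    "stop_sign": 0.80,
}


def driving_score(route_completion: float, infractions: list[str]) -> float:
    """route_completion is a fraction in [0, 1]; infractions lists the
    infraction types committed on this route."""
    penalty = 1.0
    for infraction in infractions:
        penalty *= INFRACTION_COEFFICIENTS[infraction]
    return route_completion * penalty


# 90 % of the route completed, one red light and one vehicle collision:
# 0.9 * 0.7 * 0.6, i.e. roughly 0.378
print(driving_score(0.9, ["red_light", "collision_vehicle"]))
```

Because the penalty is multiplicative, even a handful of infractions collapses the score quickly, which is why the severity ordering below matters.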
After a timeout of **4 minutes** the vehicle will be released; however, a penalty is applied -* scenario timeout (feature behaviours can block ego vehicle): 0.70 +- scenario timeout (feature behaviours can block ego vehicle): 0.70 Agent should keep a minimum speed compared to the nearby traffic. The penalty increases with the difference in speed. -* Failure to maintain minimum speed: 0.70 +- Failure to maintain minimum speed: 0.70 Agent should let emergency vehicles from behind pass. -* Failure to yield to emergency vehicle: 0.70 +- Failure to yield to emergency vehicle: 0.70 If the agent drives off-road, that percentage does not count towards the route completion -* Off-road driving: not considered towards the computation of the route completion score +- Off-road driving: not considered towards the computation of the route completion score ### Shutdown criteria Some events will interrupt the simulation, resulting in an incomplete route -* route deviation - more than 30 meters from assigned route -* agent blocked - if agent does not take an action for 180 seconds -* simulation timeout - no client-server communication in 60 seconds -* route timeout - simulation takes too long to finish +- route deviation - more than 30 meters from assigned route +- agent blocked - if agent does not take an action for 180 seconds +- simulation timeout - no client-server communication in 60 seconds +- route timeout - simulation takes too long to finish ## Submission diff --git a/doc/03_research/04_requirements/03_requirements.md index 9f8755ab..953bd900 100644 --- a/doc/03_research/04_requirements/03_requirements.md +++ b/doc/03_research/04_requirements/03_requirements.md @@ -16,24 +16,23 @@ Josef Kircher, Simon Erlbacher --- -* [Requirements](#requirements) - * [Author](#author) - * [Date](#date) - * [Prerequisite](#prerequisite) - * [Requirements from Leaderboard tasks](#requirements-from-leaderboard-tasks) - * [Carla 
Leaderboard Score](#carla-leaderboard-score) - * [Prioritized driving aspects](#prioritized-driving-aspects) - * [more Content](#more-content) - * [Sources](#sources) +- [Requirements](#requirements) + - [Author](#author) + - [Date](#date) + - [Prerequisite](#prerequisite) + - [Requirements from Leaderboard tasks](#requirements-from-leaderboard-tasks) + - [Prioritized driving aspects](#prioritized-driving-aspects) + - [more Content](#more-content) + - [Sources](#sources) ## Requirements from Leaderboard tasks -* follow waypoints on a route -* don't deviate from route by more than 30 meters -* act in accordance with traffic rules -* don't get blocked -* complete 10 routes (2 weather conditions) +- follow waypoints on a route +- don't deviate from route by more than 30 meters +- act in accordance with traffic rules +- don't get blocked +- complete 10 routes (2 weather conditions) --- @@ -45,33 +44,33 @@ Also, it is appropriate to implement the basic features of an autonomous car fir `Very important:` -* Recognize the street limitations -* Recognize pedestrians -* Follow the waypoints -* Recognize traffic lights -* Recognize obstacles -* Recognize cars in front of the agent (keep distance) -* Steering, accelerate, decelerate -* Street rules (no street signs available) -* Change lane (obstacles) +- Recognize the street limitations +- Recognize pedestrians +- Follow the waypoints +- Recognize traffic lights +- Recognize obstacles +- Recognize cars in front of the agent (keep distance) +- Steering, accelerate, decelerate +- Street rules (no street signs available) +- Change lane (obstacles) `Important:` -* Check Intersection -* Sense traffic (speed and trajectory) -* Predict traffic -* Emergency brake -* Sense length of ramp -* Recognize space (Turn into highway) -* Change lane (safe) -* Recognize emergency vehicle -* Recognize unexpected dynamic situations (opening door, bycicles,...) 
+- Check Intersection +- Sense traffic (speed and trajectory) +- Predict traffic +- Emergency brake +- Sense length of ramp +- Recognize space (Turn into highway) +- Change lane (safe) +- Recognize emergency vehicle +- Recognize unexpected dynamic situations (opening door, bicycles,...) `Less important:` -* Smooth driving (accelerate, decelerate, stop) -* Weather Condition -* Predict pedestrians +- Smooth driving (accelerate, decelerate, stop) +- Weather Condition +- Predict pedestrians --- diff --git a/doc/03_research/04_requirements/04_use_cases.md index 59cd6984..ee58d615 100644 --- a/doc/03_research/04_requirements/04_use_cases.md +++ b/doc/03_research/04_requirements/04_use_cases.md @@ -16,33 +16,163 @@ Josef Kircher --- -* [Use cases in Carla Leaderboard](#use-cases-in-carla-leaderboard) - * [Author](#author) - * [Date](#date) - * [Prerequisite](#prerequisite) - * [1. Control loss due to bad road condition](#1-control-loss-due-to-bad-road-condition) - * [2. Unprotected left turn at intersection with oncoming traffic](#2-unprotected-left-turn-at-intersection-with-oncoming-traffic) - * [3. Right turn at an intersection with crossing traffic](#3-right-turn-at-an-intersection-with-crossing-traffic) - * [4. Crossing negotiation at unsignalized intersection](#4-crossing-negotiation-at-unsignalized-intersection) - * [5. Crossing traffic running a red light at intersection](#5-crossing-traffic-running-a-red-light-at-intersection) - * [6. Highway merge from on-ramp](#6-highway-merge-from-on-ramp) - * [7. Highway cut-in from on-ramp](#7-highway-cut-in-from-on-ramp) - * [8. Static cut-in](#8-static-cut-in) - * [9. Highway exit](#9-highway-exit) - * [10. Yield to emergency vehicle](#10-yield-to-emergency-vehicle) - * [11. Obstacle in lane](#11-obstacle-in-lane) - * [12. Door Obstacle](#12-door-obstacle) - * [13. Slow moving hazard at lane edge](#13-slow-moving-hazard-at-lane-edge) - * [14. 
Vehicle invading lane on bend](#14-vehicle-invading-lane-on-bend) - * [15. Longitudinal control after leading vehicle brakes](#15-longitudinal-control-after-leading-vehicle-brakes) - * [16. Obstacle avoidance without prior action](#16-obstacle-avoidance-without-prior-action) - * [17. Pedestrian emerging from behind parked vehicle](#17-pedestrian-emerging-from-behind-parked-vehicle) - * [18. Obstacle avoidance with prior action](#18-obstacle-avoidance-with-prior-action) - * [19. Parking Cut-in](#19-parking-cut-in) - * [20. Lane changing to evade slow leading vehicle](#20-lane-changing-to-evade-slow-leading-vehicle) - * [21. Passing obstacle with oncoming traffic](#21-passing-obstacle-with-oncoming-traffic) - * [22. Parking Exit](#22-parking-exit) - * [Sources](#sources) +- [Use cases in Carla Leaderboard](#use-cases-in-carla-leaderboard) + - [Author](#author) + - [Date](#date) + - [Prerequisite](#prerequisite) + - [1. Control loss due to bad road condition](#1-control-loss-due-to-bad-road-condition) + - [Description](#description) + - [Pre-condition(Event)](#pre-conditionevent) + - [Driving functions](#driving-functions) + - [Outcome](#outcome) + - [Associated use cases](#associated-use-cases) + - [2. Unprotected left turn at intersection with oncoming traffic](#2-unprotected-left-turn-at-intersection-with-oncoming-traffic) + - [Description](#description-1) + - [Basic flow](#basic-flow) + - [Pre-condition(Event)](#pre-conditionevent-1) + - [Driving functions](#driving-functions-1) + - [Outcome](#outcome-1) + - [Associated use cases](#associated-use-cases-1) + - [3. Right turn at an intersection with crossing traffic](#3-right-turn-at-an-intersection-with-crossing-traffic) + - [Description](#description-2) + - [Basic flow](#basic-flow-1) + - [Pre-condition(Event)](#pre-conditionevent-2) + - [Driving functions](#driving-functions-2) + - [Outcome](#outcome-2) + - [Associated use cases](#associated-use-cases-2) + - [4. 
Crossing negotiation at unsignalized intersection](#4-crossing-negotiation-at-unsignalized-intersection) + - [Description](#description-3) + - [Basic flow](#basic-flow-2) + - [Pre-condition(Event)](#pre-conditionevent-3) + - [Driving functions](#driving-functions-3) + - [Outcome](#outcome-3) + - [Associated use cases](#associated-use-cases-3) + - [5. Crossing traffic running a red light at intersection](#5-crossing-traffic-running-a-red-light-at-intersection) + - [Description](#description-4) + - [Pre-condition(Event)](#pre-conditionevent-4) + - [Driving functions](#driving-functions-4) + - [Outcome](#outcome-4) + - [Associated use cases](#associated-use-cases-4) + - [6. Highway merge from on-ramp](#6-highway-merge-from-on-ramp) + - [Description](#description-5) + - [Basic flow](#basic-flow-3) + - [Pre-condition(Event)](#pre-conditionevent-5) + - [Driving functions](#driving-functions-5) + - [Outcome](#outcome-5) + - [Associated use cases](#associated-use-cases-5) + - [7. Highway cut-in from on-ramp](#7-highway-cut-in-from-on-ramp) + - [Description](#description-6) + - [Basic flow](#basic-flow-4) + - [Pre-condition(Event)](#pre-conditionevent-6) + - [Driving functions](#driving-functions-6) + - [Outcome](#outcome-6) + - [Associated use cases](#associated-use-cases-6) + - [8. Static cut-in](#8-static-cut-in) + - [Description](#description-7) + - [Basic flow](#basic-flow-5) + - [Pre-condition(Event)](#pre-conditionevent-7) + - [Driving functions](#driving-functions-7) + - [Outcome](#outcome-7) + - [Associated use cases](#associated-use-cases-7) + - [9. Highway exit](#9-highway-exit) + - [Description](#description-8) + - [Basic flow](#basic-flow-6) + - [Pre-condition(Event)](#pre-conditionevent-8) + - [Driving functions](#driving-functions-8) + - [Outcome](#outcome-8) + - [Associated use cases](#associated-use-cases-8) + - [10. 
Yield to emergency vehicle](#10-yield-to-emergency-vehicle) + - [Description](#description-9) + - [Basic flow](#basic-flow-7) + - [Pre-condition(Event)](#pre-conditionevent-9) + - [Driving functions](#driving-functions-9) + - [Outcome](#outcome-9) + - [Associated use cases](#associated-use-cases-9) + - [11. Obstacle in lane](#11-obstacle-in-lane) + - [Description](#description-10) + - [Basic flow](#basic-flow-8) + - [Pre-condition(Event)](#pre-conditionevent-10) + - [Driving functions](#driving-functions-10) + - [Outcome](#outcome-10) + - [Associated use cases](#associated-use-cases-10) + - [12. Door Obstacle](#12-door-obstacle) + - [Description](#description-11) + - [Basic flow](#basic-flow-9) + - [Pre-condition(Event)](#pre-conditionevent-11) + - [Driving functions](#driving-functions-11) + - [Outcome](#outcome-11) + - [Associated use cases](#associated-use-cases-11) + - [13. Slow moving hazard at lane edge](#13-slow-moving-hazard-at-lane-edge) + - [Description](#description-12) + - [Basic flow](#basic-flow-10) + - [Pre-condition(Event)](#pre-conditionevent-12) + - [Driving functions](#driving-functions-12) + - [Outcome](#outcome-12) + - [Associated use cases](#associated-use-cases-12) + - [14. Vehicle invading lane on bend](#14-vehicle-invading-lane-on-bend) + - [Description](#description-13) + - [Basic flow](#basic-flow-11) + - [Pre-condition(Event)](#pre-conditionevent-13) + - [Driving functions](#driving-functions-13) + - [Outcome](#outcome-13) + - [Associated use cases](#associated-use-cases-13) + - [15. Longitudinal control after leading vehicle brakes](#15-longitudinal-control-after-leading-vehicle-brakes) + - [Description](#description-14) + - [Basic flow](#basic-flow-12) + - [Pre-condition(Event)](#pre-conditionevent-14) + - [Driving functions](#driving-functions-14) + - [Outcome](#outcome-14) + - [Associated use cases](#associated-use-cases-14) + - [16. 
Obstacle avoidance without prior action](#16-obstacle-avoidance-without-prior-action) + - [Description](#description-15) + - [Basic flow](#basic-flow-13) + - [Pre-condition(Event)](#pre-conditionevent-15) + - [Driving functions](#driving-functions-15) + - [Outcome](#outcome-15) + - [Associated use cases](#associated-use-cases-15) + - [17. Pedestrian emerging from behind parked vehicle](#17-pedestrian-emerging-from-behind-parked-vehicle) + - [Description](#description-16) + - [Basic flow](#basic-flow-14) + - [Pre-condition(Event)](#pre-conditionevent-16) + - [Driving functions](#driving-functions-16) + - [Outcome](#outcome-16) + - [Associated use cases](#associated-use-cases-16) + - [18. Obstacle avoidance with prior action](#18-obstacle-avoidance-with-prior-action) + - [Description](#description-17) + - [Basic flow](#basic-flow-15) + - [Pre-condition(Event)](#pre-conditionevent-17) + - [Driving functions](#driving-functions-17) + - [Outcome](#outcome-17) + - [Associated use cases](#associated-use-cases-17) + - [19. Parking Cut-in](#19-parking-cut-in) + - [Description](#description-18) + - [Basic flow](#basic-flow-16) + - [Pre-condition(Event)](#pre-conditionevent-18) + - [Driving functions](#driving-functions-18) + - [Outcome](#outcome-18) + - [Associated use cases](#associated-use-cases-18) + - [20. Lane changing to evade slow leading vehicle](#20-lane-changing-to-evade-slow-leading-vehicle) + - [Description](#description-19) + - [Basic flow](#basic-flow-17) + - [Pre-condition(Event)](#pre-conditionevent-19) + - [Driving functions](#driving-functions-19) + - [Outcome](#outcome-19) + - [Associated use cases](#associated-use-cases-19) + - [21. 
Passing obstacle with oncoming traffic](#21-passing-obstacle-with-oncoming-traffic) + - [Description](#description-20) + - [Basic flow](#basic-flow-18) + - [Pre-condition(Event)](#pre-conditionevent-20) + - [Driving functions](#driving-functions-20) + - [Outcome](#outcome-20) + - [Associated use cases](#associated-use-cases-20) + - [22. Parking Exit](#22-parking-exit) + - [Description](#description-21) + - [Basic flow](#basic-flow-19) + - [Pre-condition(Event)](#pre-conditionevent-21) + - [Driving functions](#driving-functions-21) + - [Outcome](#outcome-21) + - [Associated use cases](#associated-use-cases-21) + - [Sources](#sources) --- @@ -61,9 +191,9 @@ Loss of control ### Driving functions -* Control steering angle, throttle and brake to counter unexpected movements +- Control steering angle, throttle and brake to counter unexpected movements -* (Opt): Sense wheel friction to predict unexpected behaviour +- (Opt): Sense wheel friction to predict unexpected behaviour ### Outcome @@ -98,13 +228,13 @@ Global route wants you to perform a left turn at an intersection ### Driving functions -* Sense street signs and traffic lights -* Observe the intersection -* Sense oncoming traffic -* (Check indicator of oncoming traffic) -* Sense pedestrians in your drive path -* Steer the vehicle in a left turn -* Predict if a turn is possible before oncoming traffic reaches the intersection +- Sense street signs and traffic lights +- Observe the intersection +- Sense oncoming traffic +- (Check indicator of oncoming traffic) +- Sense pedestrians in your drive path +- Steer the vehicle in a left turn +- Predict if a turn is possible before oncoming traffic reaches the intersection ### Outcome @@ -144,13 +274,13 @@ Global route wants you to perform a right turn at an intersection ### Driving functions -* Sense street signs and traffic lights -* Observe the intersection -* Sense crossing traffic -* Check indicator of crossing traffic -* Sense pedestrians in your drive path -* Steer 
the vehicle in a right turn -* Predict if a turn is possible before crossing traffic reaches the intersection +- Sense street signs and traffic lights +- Observe the intersection +- Sense crossing traffic +- Check indicator of crossing traffic +- Sense pedestrians in your drive path +- Steer the vehicle in a right turn +- Predict if a turn is possible before crossing traffic reaches the intersection ### Outcome @@ -192,10 +322,10 @@ No traffic lights or street signs are sensed and agent is at an intersection ### Driving functions -* Sense street signs and traffic lights -* Observe the intersection -* Sense pedestrians in your drive path -* Steering the vehicle +- Sense street signs and traffic lights +- Observe the intersection +- Sense pedestrians in your drive path +- Steering the vehicle ### Outcome @@ -225,10 +355,10 @@ Vehicle enters intersection while having a red light ### Driving functions -* Sense street signs and traffic lights -* Observe the intersection -* Sense crossing traffic -* Emergency brake +- Sense street signs and traffic lights +- Observe the intersection +- Sense crossing traffic +- Emergency brake ### Outcome @@ -269,10 +399,10 @@ Vehicle enters a highway ### Driving functions -* Sense speed of surrounding traffic -* Sense length of ramp -* Adjust speed to enter highway -* Turn into highway +- Sense speed of surrounding traffic +- Sense length of ramp +- Adjust speed to enter highway +- Turn into highway ### Outcome @@ -310,11 +440,11 @@ Vehicle enters a highway ### Driving functions -* Sense speed of surrounding traffic -* Adjust speed to let vehicle enter highway -* Change lane -* Decelerate -* Brake +- Sense speed of surrounding traffic +- Adjust speed to let vehicle enter highway +- Change lane +- Decelerate +- Brake ### Outcome @@ -352,11 +482,11 @@ Vehicle tries to cut-in ### Driving functions -* Sense speed of surrounding traffic -* Adjust speed to let vehicle enter lane -* Change lane -* Decelerate -* Brake +- Sense speed of 
surrounding traffic +- Adjust speed to let vehicle enter lane +- Change lane +- Decelerate +- Brake ### Outcome @@ -397,12 +527,12 @@ Vehicle leaves a highway ### Driving functions -* Sense speed of surrounding traffic -* Sense distance to off-ramp -* Adjust speed to change lane -* Change lane -* Decelerate -* Brake +- Sense speed of surrounding traffic +- Sense distance to off-ramp +- Adjust speed to change lane +- Change lane +- Decelerate +- Brake ### Outcome @@ -441,10 +571,10 @@ Emergency vehicle behind us ### Driving functions -* Sense emergency vehicle -* Sense speed of surrounding traffic -* Adjust speed to change lane -* Change lane +- Sense emergency vehicle +- Sense speed of surrounding traffic +- Adjust speed to change lane +- Change lane ### Outcome @@ -481,11 +611,11 @@ Obstacle on lane ### Driving functions -* Sense obstacles -* Sense speed of surrounding traffic -* Change lane -* Decelerate -* Brake +- Sense obstacles +- Sense speed of surrounding traffic +- Change lane +- Decelerate +- Brake ### Outcome @@ -536,11 +666,11 @@ Door opens in lane ### Driving functions -* Sense opening door -* Sense speed of surrounding traffic -* Change lane -* Decelerate -* Brake +- Sense opening door +- Sense speed of surrounding traffic +- Change lane +- Decelerate +- Brake ### Outcome @@ -585,11 +715,11 @@ slow moving hazard(bicycle) in lane ### Driving functions -* Sense slow moving hazards -* Sense speed of surrounding traffic -* Change lane -* Decelerate -* Brake +- Sense slow moving hazards +- Sense speed of surrounding traffic +- Change lane +- Decelerate +- Brake ### Outcome @@ -632,10 +762,10 @@ Bend in the road and a vehicle invading our lane ### Driving functions -* Sense vehicle on our lane -* Decelerate -* Brake -* Move to right part of lane +- Sense vehicle on our lane +- Decelerate +- Brake +- Move to right part of lane ### Outcome @@ -667,10 +797,10 @@ Vehicle in front suddenly slows down ### Driving functions -* Sense vehicle on our lane -* Sense 
vehicle speed -* Decelerate -* Emergency-/Brake +- Sense vehicle on our lane +- Sense vehicle speed +- Decelerate +- Emergency-/Brake ### Outcome @@ -709,9 +839,9 @@ Obstacle in front suddenly appears ### Driving functions -* Sense obstacle on our lane -* Decelerate -* Emergency-/Brake +- Sense obstacle on our lane +- Decelerate +- Emergency-/Brake ### Outcome @@ -760,9 +890,9 @@ Pedestrian in front suddenly appears from behind a parked car. ### Driving functions -* Sense pedestrian on our lane -* Decelerate -* Emergency-/Brake +- Sense pedestrian on our lane +- Decelerate +- Emergency-/Brake ### Outcome @@ -803,9 +933,9 @@ Obstacle in planned driving path ### Driving functions -* Sense obstacle in driving path -* Decelerate -* Emergency-/Brake +- Sense obstacle in driving path +- Decelerate +- Emergency-/Brake ### Outcome @@ -842,9 +972,9 @@ Parked car tries to join traffic ### Driving functions -* Sense parked car starts moving -* Decelerate -* Emergency-/Brake +- Sense parked car starts moving +- Decelerate +- Emergency-/Brake ### Outcome @@ -883,11 +1013,11 @@ Speed of car is under a certain threshold ### Driving functions -* Sense speed of traffic -* Sense vehicles in surrounding lanes -* Decelerate -* Emergency-/Brake -* Change lane +- Sense speed of traffic +- Sense vehicles in surrounding lanes +- Decelerate +- Emergency-/Brake +- Change lane ### Outcome @@ -929,14 +1059,14 @@ Obstacle in front of us with oncoming traffic ### Driving functions -* Sense obstacle -* Sense length of obstacle -* Sense speed, distance of oncoming traffic -* Sense vehicles in surrounding lanes -* Decelerate -* Brake -* Change lane -* Rejoin old lane after the obstacle +- Sense obstacle +- Sense length of obstacle +- Sense speed, distance of oncoming traffic +- Sense vehicles in surrounding lanes +- Decelerate +- Brake +- Change lane +- Rejoin old lane after the obstacle ### Outcome @@ -981,11 +1111,11 @@ Ego-vehicle is parked and wants to join traffic ### Driving functions -* 
Sense space of parking bay -* Sense speed, distance of traffic -* Sense vehicles in lane the agent wants to join -* Accelerate -* Change lane(Join traffic) +- Sense space of parking bay +- Sense speed, distance of traffic +- Sense vehicles in lane the agent wants to join +- Accelerate +- Change lane (Join traffic) ### Outcome diff --git a/doc/03_research/04_requirements/Readme.md index e45c90be..a2f40164 100644 --- a/doc/03_research/04_requirements/Readme.md +++ b/doc/03_research/04_requirements/Readme.md @@ -2,6 +2,6 @@ This folder contains all the results of our research on requirements: -* [Leaderboard information](./02_informations_from_leaderboard.md) -* [Reqirements for agent](./03_requirements.md) -* [Use case scenarios](./04_use_cases.md) +- [Leaderboard information](./02_informations_from_leaderboard.md) +- [Requirements for agent](./03_requirements.md) +- [Use case scenarios](./04_use_cases.md) diff --git a/doc/03_research/Readme.md index f4948302..04591a65 100644 --- a/doc/03_research/Readme.md +++ b/doc/03_research/Readme.md @@ -4,7 +4,7 @@ This folder contains all the research we did before we started the project. 
The research is structured in the following folders: -* [Acting](./01_acting/Readme.md) -* [Perception](./02_perception/Readme.md) -* [Planning](./03_planning/Readme.md) -* [Requirements](./04_requirements/Readme.md) +- [Acting](./01_acting/Readme.md) +- [Perception](./02_perception/Readme.md) +- [Planning](./03_planning/Readme.md) +- [Requirements](./04_requirements/Readme.md) diff --git a/doc/06_perception/02_dataset_structure.md b/doc/06_perception/02_dataset_structure.md index 24a4d8e0..aecd0a40 100644 --- a/doc/06_perception/02_dataset_structure.md +++ b/doc/06_perception/02_dataset_structure.md @@ -13,15 +13,15 @@ Marco Riedenauer 19.02.2023 -* [Dataset structure](#dataset-structure) - * [Author](#author) - * [Date](#date) - * [Converting the dataset](#converting-the-dataset) - * [Preparation of the dataset for training](#preparation-of-the-dataset-for-training) - * [Explanation of the conversion of groundtruth images](#explanation-of-the-conversion-of-groundtruth-images) - * [Things](#things) - * [Stuff](#stuff) - * [Explanation of creating json files](#explanation-of-creating-json-files) +- [Dataset structure](#dataset-structure) + - [Author](#author) + - [Date](#date) + - [Converting the dataset](#converting-the-dataset) + - [Preparation of the dataset for training](#preparation-of-the-dataset-for-training) + - [Explanation of the conversion of groundtruth images](#explanation-of-the-conversion-of-groundtruth-images) + - [Things](#things) + - [Stuff](#stuff) + - [Explanation of creating json files](#explanation-of-creating-json-files) ## Converting the dataset @@ -64,7 +64,7 @@ following structure: When the dataset has the correct structure, the groundtruth images have to be converted to COCO format and some json files have to be created. 
-To do so, execute the following command in your b5 shell: +To do so, execute the following command in an attached shell: ```shell python3 perception/src/panoptic_segmentation/preparation/createPanopticImgs.py --dataset_folder diff --git a/doc/06_perception/03_lidar_distance_utility.md b/doc/06_perception/03_lidar_distance_utility.md index 2d68e6f1..f81d2904 100644 --- a/doc/06_perception/03_lidar_distance_utility.md +++ b/doc/06_perception/03_lidar_distance_utility.md @@ -24,11 +24,11 @@ Tim Dreier --- -* [Lidar Distance Utility](#lidar-distance-utility) - * [Author](#author) - * [Date](#date) - * [Configuration](#configuration) - * [Example](#example) +- [Lidar Distance Utility](#lidar-distance-utility) + - [Author](#author) + - [Date](#date) + - [Configuration](#configuration) + - [Example](#example) ## Configuration diff --git a/doc/06_perception/04_efficientps.md b/doc/06_perception/04_efficientps.md index 4fa17b74..92ce43a4 100644 --- a/doc/06_perception/04_efficientps.md +++ b/doc/06_perception/04_efficientps.md @@ -15,14 +15,14 @@ Marco Riedenauer 28.03.2023 -* [EfficientPS](#efficientps) - * [Author](#author) - * [Date](#date) - * [Model Overview](#model-overview) - * [Training](#training) - * [Labels](#labels) - * [Training parameters](#training-parameters) - * [Train](#train) +- [EfficientPS](#efficientps) + - [Author](#author) + - [Date](#date) + - [Model Overview](#model-overview) + - [Training](#training) + - [Labels](#labels) + - [Training parameters](#training-parameters) + - [Train](#train) ## Model Overview @@ -35,13 +35,13 @@ case, since we used half the image size. ![EfficientPS Structure](../00_assets/efficientps_structure.png) [Source](https://arxiv.org/pdf/2004.02307.pdf) -* Feature Extraction: +- Feature Extraction: This is the first part of the model on which all following parts depend on. In this part, all important features are extracted from the input image. 
-* Semantic Segmentation Head: As the name implies, this part of the model computes a semantic segmentation on the +- Semantic Segmentation Head: As the name implies, this part of the model computes a semantic segmentation on the extracted features. -* Instance Segmentation Head: This part computes the instance segmentation on things on the extracted features. -* Panoptic Fusion: As the last part of the model, this component is responsible for combining the information gathered +- Instance Segmentation Head: This part computes the instance segmentation on things on the extracted features. +- Panoptic Fusion: As the last part of the model, this component is responsible for combining the information gathered by the semantic segmentation and the instance segmentation heads. The output of this component and thereby the model is an image where stuff is semantic segmented and things are instance segmented. @@ -64,19 +64,19 @@ All adaptable training parameters can be found and changed in The most important configs are: -* MODEL/ROI_HEADS/NUM_CLASSES: Number of instance classes -* DATASET_PATH: Path to dataset root -* TRAIN_JSON: Relative path from DATASET_PATH to train json file -* VALID_JSON: Relative path from DATASET_PATH to validation json file -* PRED_DIR: Directory to save predictions in -* PRED_JSON: Name of prediction json file -* CHECKPOINT_PATH: Path of already trained models you want to train furthermore -* BATCH_SIZE: Number of images to be loaded during on training step -* NUM_CLASSES: Number of all classes +- MODEL/ROI_HEADS/NUM_CLASSES: Number of instance classes +- DATASET_PATH: Path to dataset root +- TRAIN_JSON: Relative path from DATASET_PATH to train json file +- VALID_JSON: Relative path from DATASET_PATH to validation json file +- PRED_DIR: Directory to save predictions in +- PRED_JSON: Name of prediction json file +- CHECKPOINT_PATH: Path of already trained models you want to train further +- BATCH_SIZE: Number of images to be loaded during one 
training step +- NUM_CLASSES: Number of all classes ### Train -To start the training, just execute the following command in b5 shell: +To start the training, just execute the following command in an attached shell: ```shell python3 perception/src/panoptic_segmentation/train_net.py From bf6d0b2e8cb42eea31a09d7e6ee470bd678c034b Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Tue, 8 Oct 2024 16:16:05 +0200 Subject: [PATCH 13/28] Removed precommit hooks & comlipy --- build/README.md | 34 +++++++++++---------- build/config-comlipy.yml | 23 -------------- build/docker/comlipy/Dockerfile | 8 ----- build/hooks/commit-msg | 6 ---- build/hooks/pre-commit | 18 ----------- build/linter_services.yaml | 6 ---- doc/02_development/03_commit.md | 43 --------------------------- doc/02_development/10_build_action.md | 13 -------- 8 files changed, 18 insertions(+), 133 deletions(-) delete mode 100644 build/config-comlipy.yml delete mode 100644 build/docker/comlipy/Dockerfile delete mode 100755 build/hooks/commit-msg delete mode 100755 build/hooks/pre-commit delete mode 100644 doc/02_development/03_commit.md diff --git a/build/README.md b/build/README.md index a3d8ff0d..7665c425 100644 --- a/build/README.md +++ b/build/README.md @@ -7,21 +7,24 @@ facilitating both normal and distributed execution modes. 
## Table of Contents -- [Directory Structure](#directory-structure) -- [Base Service Files](#base-service-files) - - [`agent_service.yaml`](#agent_serviceyaml) - - [`carla-simulator_service.yaml`](#carla-simulator_serviceyaml) - - [`linter_services.yaml`](#linter_servicesyaml) - - [`roscore_service.yaml`](#roscore_serviceyaml) -- [Docker Compose Files](#docker-compose-files) - - [`docker-compose.yaml`](#docker-composeyaml) - - [`docker-compose_dev.yaml`](#docker-composedevyaml) - - [`docker-compose_cicd.yaml`](#docker-compose_cicdyaml) -- [Execution Modes](#execution-modes) - - [Normal Execution](#normal-execution) - - [Distributed Execution](#distributed-execution) -- [Usage](#usage) -- [Notes](#notes) +- [Build Directory Documentation](#build-directory-documentation) + - [Table of Contents](#table-of-contents) + - [Directory Structure](#directory-structure) + - [Base Service Files](#base-service-files) + - [`agent_service.yaml`](#agent_serviceyaml) + - [`carla-simulator_service.yaml`](#carla-simulator_serviceyaml) + - [`linter_services.yaml`](#linter_servicesyaml) + - [`roscore_service.yaml`](#roscore_serviceyaml) + - [Docker Compose Files](#docker-compose-files) + - [`docker-compose.yaml`](#docker-composeyaml) + - [`docker-compose_dev.yaml`](#docker-compose_devyaml) + - [`docker-compose_cicd.yaml`](#docker-compose_cicdyaml) + - [Execution Modes](#execution-modes) + - [Normal Execution](#normal-execution) + - [Distributed Execution](#distributed-execution) + - [Usage](#usage) + - [Notes](#notes) + - [Conclusion](#conclusion) ## Directory Structure @@ -65,7 +68,6 @@ Defines the configuration for the `carla-simulator` service, which runs the CARL Defines services for code linting and static analysis. Includes: - **flake8**: For Python linting. -- **comlipy**: Custom linting based on project requirements. - **mdlint**: For Markdown file linting. - **Volumes**: Mounts the project directory for linting files within the container. 
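As a usage aside (not part of the patch itself): the linter services described above can also be invoked individually with `docker compose run`. The helper below only assembles the command string; the service names (`flake8`, `mdlint`) are taken from the service list above, while the `--rm` flag and the exact invocation style are assumptions about how one might run them.

```shell
#!/bin/sh
# Sketch: build the invocation for one linter service from
# build/linter_services.yaml. Service names per the list above.
lint_cmd() {
    printf 'docker compose -f build/linter_services.yaml run --rm %s\n' "$1"
}

# Example: print the command for the Markdown linter.
lint_cmd mdlint
```

Running the printed command requires Docker with the Compose v2 plugin on the host.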
diff --git a/build/config-comlipy.yml b/build/config-comlipy.yml deleted file mode 100644 index 1eb36ffc..00000000 --- a/build/config-comlipy.yml +++ /dev/null @@ -1,23 +0,0 @@ -# comlipy config file (commit naming) -global: - help: 'Help: https://github.com/una-auxme/paf/blob/main/doc/02_development/03_commit.md' - -rules: - scope-min-length: - applicable: 'always' - value: 2 - level: 1 - type-enum: - applicable: 'always' - value: - - 'docs' - - 'feat' - - 'fix' - - 'other' - level: 1 - subject-case: - applicable: 'never' - value: - - 'upper-case' - level: 1 - diff --git a/build/docker/comlipy/Dockerfile b/build/docker/comlipy/Dockerfile deleted file mode 100644 index b98681f2..00000000 --- a/build/docker/comlipy/Dockerfile +++ /dev/null @@ -1,8 +0,0 @@ -FROM python:3-alpine - -RUN pip install --no-cache-dir comlipy - -WORKDIR /apps - -ENTRYPOINT ["comlipy"] -CMD ["--help"] \ No newline at end of file diff --git a/build/hooks/commit-msg b/build/hooks/commit-msg deleted file mode 100755 index d0fb42e4..00000000 --- a/build/hooks/commit-msg +++ /dev/null @@ -1,6 +0,0 @@ -#!/usr/bin/env bash - -# get the commit message -commit_msg=$(<"${1:-}") - -b5 comlipy "$commit_msg" \ No newline at end of file diff --git a/build/hooks/pre-commit b/build/hooks/pre-commit deleted file mode 100755 index e8a0de65..00000000 --- a/build/hooks/pre-commit +++ /dev/null @@ -1,18 +0,0 @@ -#!/bin/sh -# Called by "git commit" with no arguments. The hook should -# exit with non-zero status after issuing an appropriate message if -# it wants to stop the commit. - -set -o errexit - -echo ############################################ -echo Starting git hooks -echo ############################################ - -for s in ./build/hooks/pre-commit.d/*.sh; do - . 
"./$s" -done - -echo ############################################ -echo Finished git hooks -echo ############################################ \ No newline at end of file diff --git a/build/linter_services.yaml b/build/linter_services.yaml index 3e386f88..0816aaab 100644 --- a/build/linter_services.yaml +++ b/build/linter_services.yaml @@ -5,12 +5,6 @@ services: volumes: - ../:/apps - comlipy: - build: docker/comlipy - command: . - volumes: - - ../:/apps - mdlint: image: peterdavehello/markdownlint:0.32.2 command: markdownlint . diff --git a/doc/02_development/03_commit.md b/doc/02_development/03_commit.md deleted file mode 100644 index b14789c2..00000000 --- a/doc/02_development/03_commit.md +++ /dev/null @@ -1,43 +0,0 @@ -# Git commit message conventions - -(Kept from previous group [paf22]) - -[Conventional Commits](https://www.conventionalcommits.org/) are enforced by [comlipy](https://gitlab.com/slashplus-build/comlipy/) during commit. The commit message should be structured as follows: - -```text -(optional scope): - -[optional body] - -[optional footer(s)] -``` - -(Example inspired by [https://www.conventionalcommits.org/](https://www.conventionalcommits.org/)) - -## Possible types - -| type | description | -|-------|------------------------------------| -| `docs` | Changes in documentation | -| `feat` | A new feature | -| `fix` | A bug fix | -| `other` | Anything else (should be avoided!) | - -## Possible scopes - -As scope we take the number of the issue the commit belongs to, prefixed by a #. 
- -## Example - -Some resulting commit message would be: - -```text -feat(#18): Added front left camera to vehicle -docs(#18): Added documentation about front left camera -``` - -## 🚨 Common Problems - -`Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:...` - -- Make sure your docker client is running diff --git a/doc/02_development/10_build_action.md b/doc/02_development/10_build_action.md index c72b6488..ec311b43 100644 --- a/doc/02_development/10_build_action.md +++ b/doc/02_development/10_build_action.md @@ -32,7 +32,6 @@ Tim Dreier, Korbinian Stein - [2. Set up Docker Buildx (`docker/setup-buildx-action@v2`)](#2-set-up-docker-buildx-dockersetup-buildx-actionv2) - [3. Log in to the Container registry (`docker/login-action@v2`)](#3-log-in-to-the-container-registry-dockerlogin-actionv2) - [4. Bump version and push tag (`mathieudutour/github-tag-action`)](#4-bump-version-and-push-tag-mathieudutourgithub-tag-action) - - [Example](#example) - [5. Get commit hash](#5-get-commit-hash) - [6. Build and push Docker image](#6-build-and-push-docker-image) - [The drive job](#the-drive-job) @@ -85,18 +84,6 @@ Example taken from [here](https://docs.github.com/en/actions/publishing-packages ### 4. Bump version and push tag ([`mathieudutour/github-tag-action`](https://github.com/mathieudutour/github-tag-action)) If the current commit is on the `main` branch, this action bumps the version and pushes a new tag to the repo. -Creates a new tag with a [semantic version](https://semver.org/) number for the release. -The version number is determinated by the name of the commits in the release. - -This is possible since [conventional commits](https://www.conventionalcommits.org/) are enforced by comlipy as -described [here](./02_linting.md). 
- -#### Example - -| Commit message | Release type | Previous version number | New version number | -|--------------------------------------------------------|---------------|-------------------------|--------------------| -| fix(#39): build failing due to incorrect configuration | Patch Release | 0.0.1 | 0.0.2 | -| feat(#39): Add automatic build process | Minor Release | 0.0.1 | 0.1.0 | Major releases can be done manually (e.g. `git tag v1.0.0`). From 8c043bb7b2dc2795ae4efaf0700d2a54e1ab0762 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Wed, 9 Oct 2024 09:43:31 +0200 Subject: [PATCH 14/28] Renamed compose files for execution --- .vscode/extensions.json | 3 ++- .vscode/settings.json | 9 ++++++++- ..._service.yaml => docker-compose.carla-simulator.yaml} | 0 ...docker-compose_cicd.yaml => docker-compose.cicd.yaml} | 2 +- ...-compose_dev_offline.yaml => docker-compose.dev.yaml} | 0 ...ted.yaml => docker-compose.devroute-distributed.yaml} | 2 +- ...ker-compose_dev.yaml => docker-compose.devroute.yaml} | 4 ++-- ....yaml => docker-compose.leaderboard-distributed.yaml} | 2 +- ...cker-compose.yaml => docker-compose.leaderboard.yaml} | 4 ++-- .../{linter_services.yaml => docker-compose.linter.yaml} | 0 10 files changed, 17 insertions(+), 9 deletions(-) rename build/{carla-simulator_service.yaml => docker-compose.carla-simulator.yaml} (100%) rename build/{docker-compose_cicd.yaml => docker-compose.cicd.yaml} (94%) rename build/{docker-compose_dev_offline.yaml => docker-compose.dev.yaml} (100%) rename build/{docker-compose_dev_distributed.yaml => docker-compose.devroute-distributed.yaml} (92%) rename build/{docker-compose_dev.yaml => docker-compose.devroute.yaml} (88%) rename build/{docker-compose_distributed.yaml => docker-compose.leaderboard-distributed.yaml} (93%) rename build/{docker-compose.yaml => docker-compose.leaderboard.yaml} (84%) rename build/{linter_services.yaml => docker-compose.linter.yaml} (100%) diff --git a/.vscode/extensions.json 
b/.vscode/extensions.json index ed5f4b16..6e4fc554 100644 --- a/.vscode/extensions.json +++ b/.vscode/extensions.json @@ -10,6 +10,7 @@ "ms-python.flake8", "bierner.markdown-mermaid", "richardkotze.git-mob", - "ms-vscode-remote.remote-containers" + "ms-vscode-remote.remote-containers", + "valentjn.vscode-ltex" ] } \ No newline at end of file diff --git a/.vscode/settings.json b/.vscode/settings.json index b7873462..8d42024f 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -19,5 +19,12 @@ "query": "state:open repo:${owner}/${repository} sort:created-desc" } ], - "ltex.language": "en-US" + "ltex.language": "en-US", + "docker.commands.composeUp": [ + { + "label": "Compose Up", + "template": "xhost +local: && ${composeCommand} ${configurationFile} up" + } + ], + "workbench.iconTheme": "vscode-icons" } \ No newline at end of file diff --git a/build/carla-simulator_service.yaml b/build/docker-compose.carla-simulator.yaml similarity index 100% rename from build/carla-simulator_service.yaml rename to build/docker-compose.carla-simulator.yaml diff --git a/build/docker-compose_cicd.yaml b/build/docker-compose.cicd.yaml similarity index 94% rename from build/docker-compose_cicd.yaml rename to build/docker-compose.cicd.yaml index 7fd57177..2d7a47d5 100644 --- a/build/docker-compose_cicd.yaml +++ b/build/docker-compose.cicd.yaml @@ -2,7 +2,7 @@ include: # linter runs in a seperate workflow - roscore_service.yaml - - carla-simulator_service.yaml + - docker-compose.carla-simulator.yaml services: agent: diff --git a/build/docker-compose_dev_offline.yaml b/build/docker-compose.dev.yaml similarity index 100% rename from build/docker-compose_dev_offline.yaml rename to build/docker-compose.dev.yaml diff --git a/build/docker-compose_dev_distributed.yaml b/build/docker-compose.devroute-distributed.yaml similarity index 92% rename from build/docker-compose_dev_distributed.yaml rename to build/docker-compose.devroute-distributed.yaml index e8f8b127..e8ed7e9f 100644 --- 
a/build/docker-compose_dev_distributed.yaml +++ b/build/docker-compose.devroute-distributed.yaml @@ -1,6 +1,6 @@ # compose file for the development environment with distributed mode include: - - linter_services.yaml + - docker-compose.linter.yaml - roscore_service.yaml services: diff --git a/build/docker-compose_dev.yaml b/build/docker-compose.devroute.yaml similarity index 88% rename from build/docker-compose_dev.yaml rename to build/docker-compose.devroute.yaml index 1b9129de..6f04d601 100644 --- a/build/docker-compose_dev.yaml +++ b/build/docker-compose.devroute.yaml @@ -1,9 +1,9 @@ # compose file for the development environment # routes_simple.xml include: - - linter_services.yaml + - docker-compose.linter.yaml - roscore_service.yaml - - carla-simulator_service.yaml + - docker-compose.carla-simulator.yaml services: agent: diff --git a/build/docker-compose_distributed.yaml b/build/docker-compose.leaderboard-distributed.yaml similarity index 93% rename from build/docker-compose_distributed.yaml rename to build/docker-compose.leaderboard-distributed.yaml index cd600a02..464c1aa6 100644 --- a/build/docker-compose_distributed.yaml +++ b/build/docker-compose.leaderboard-distributed.yaml @@ -1,5 +1,5 @@ include: - - linter_services.yaml + - docker-compose.linter.yaml - roscore_service.yaml services: diff --git a/build/docker-compose.yaml b/build/docker-compose.leaderboard.yaml similarity index 84% rename from build/docker-compose.yaml rename to build/docker-compose.leaderboard.yaml index 44acaf62..93a669fc 100644 --- a/build/docker-compose.yaml +++ b/build/docker-compose.leaderboard.yaml @@ -1,7 +1,7 @@ include: - - linter_services.yaml + - docker-compose.linter.yaml - roscore_service.yaml - - carla-simulator_service.yaml + - docker-compose.carla-simulator.yaml services: agent: diff --git a/build/linter_services.yaml b/build/docker-compose.linter.yaml similarity index 100% rename from build/linter_services.yaml rename to build/docker-compose.linter.yaml From 
0aece19578fff54f76aac175e5ece7afa7d5161a Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Wed, 9 Oct 2024 09:46:29 +0200 Subject: [PATCH 15/28] Removed obsolete shell scripts --- dc-run-file.sh | 13 ------------- pc_setup_user.sh | 3 --- xhost_enable.sh | 4 ---- 3 files changed, 20 deletions(-) delete mode 100755 dc-run-file.sh delete mode 100755 xhost_enable.sh diff --git a/dc-run-file.sh b/dc-run-file.sh deleted file mode 100755 index 8f182b83..00000000 --- a/dc-run-file.sh +++ /dev/null @@ -1,13 +0,0 @@ -#!/bin/bash -# run docker compose file specified as argument and located in the build directory - -# enable xhost for the current user to allow docker to display graphics -./xhost_enable.sh - -# run docker compose -if [ $# -eq 0 ]; then - echo "Usage: $0 " - exit 1 -fi - -docker compose -f "$1" up diff --git a/pc_setup_user.sh b/pc_setup_user.sh index 7efd778d..23b931f9 100755 --- a/pc_setup_user.sh +++ b/pc_setup_user.sh @@ -2,6 +2,3 @@ cd mkdir git cd git git clone https://github.com/una-auxme/paf.git - -cd paf -./dc-run-file.sh build/docker-compose.yaml \ No newline at end of file diff --git a/xhost_enable.sh b/xhost_enable.sh deleted file mode 100755 index e80e040d..00000000 --- a/xhost_enable.sh +++ /dev/null @@ -1,4 +0,0 @@ -#!/bin/bash - -# enable xhost for the current user to allow docker to display graphics -xhost +local: \ No newline at end of file From ca4fdaee113ab9d4c0c603015dc64bbe36dabbb9 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Wed, 9 Oct 2024 10:09:25 +0200 Subject: [PATCH 16/28] Updated build documentation --- build/README.md | 94 +++++++++++++++++++++++-------------------------- 1 file changed, 45 insertions(+), 49 deletions(-) diff --git a/build/README.md b/build/README.md index 7665c425..6a4e7861 100644 --- a/build/README.md +++ b/build/README.md @@ -12,13 +12,14 @@ facilitating both normal and distributed execution modes. 
- [Directory Structure](#directory-structure) - [Base Service Files](#base-service-files) - [`agent_service.yaml`](#agent_serviceyaml) - - [`carla-simulator_service.yaml`](#carla-simulator_serviceyaml) - - [`linter_services.yaml`](#linter_servicesyaml) - [`roscore_service.yaml`](#roscore_serviceyaml) - [Docker Compose Files](#docker-compose-files) - - [`docker-compose.yaml`](#docker-composeyaml) - - [`docker-compose_dev.yaml`](#docker-compose_devyaml) - - [`docker-compose_cicd.yaml`](#docker-compose_cicdyaml) + - [`docker-compose.carla-simulator.yaml`](#docker-composecarla-simulatoryaml) + - [`docker-compose.linter.yaml`](#docker-composelinteryaml) + - [`docker-compose.leaderboard.yaml`](#docker-composeleaderboardyaml) + - [`docker-compose.devroute.yaml`](#docker-composedevrouteyaml) + - [`docker-compose.dev.yaml`](#docker-composedevyaml) + - [`docker-compose.cicd.yaml`](#docker-composecicdyaml) - [Execution Modes](#execution-modes) - [Normal Execution](#normal-execution) - [Distributed Execution](#distributed-execution) @@ -32,13 +33,14 @@ The `build` directory contains the necessary configuration and setup files for b - **Base Service Files** - `agent_service.yaml` - - `carla-simulator_service.yaml` - - `linter_services.yaml` - `roscore_service.yaml` - **Docker Compose Files** - - `docker-compose.yaml` - - `docker-compose_dev.yaml` - - `docker-compose_cicd.yaml` + - `docker-compose.carla-simulator.yaml` + - `docker-compose.linter.yaml` + - `docker-compose.leaderboard.yaml` + - `docker-compose.devroute.yaml` + - `docker-compose.dev.yaml` + - `docker-compose.cicd.yaml` ## Base Service Files @@ -53,7 +55,20 @@ Defines the configuration for the `agent` service, which represents the autonomo - **Volumes**: Mounts directories like `/workspace` to share code and data between the host and the container. - **Networks**: Connects the agent to the `carla` and `ros` networks. 
-### `carla-simulator_service.yaml` +### `roscore_service.yaml` + +Defines the `roscore` service for running the ROS master node. Key configurations include: + +- **Image**: Uses the official ROS Noetic image. +- **Command**: Starts `roscore`. +- **Environment Variables**: Sets up ROS networking variables. +- **Networks**: Connects to the `ros` network. + +## Docker Compose Files + +The Docker Compose files allow the execution of different components or whole scenarios that include multiple services. + +### `docker-compose.carla-simulator.yaml` Defines the configuration for the `carla-simulator` service, which runs the CARLA simulator. Key configurations include: @@ -63,7 +78,7 @@ Defines the configuration for the `carla-simulator` service, which runs the CARL - **Volumes**: Shares the X11 UNIX socket and custom CARLA settings. - **Networks**: Connects to the `carla` network. -### `linter_services.yaml` +### `docker-compose.linter.yaml` Defines services for code linting and static analysis. Includes: @@ -71,35 +86,22 @@ Defines services for code linting and static analysis. Includes: - **mdlint**: For Markdown file linting. - **Volumes**: Mounts the project directory for linting files within the container. -### `roscore_service.yaml` - -Defines the `roscore` service for running the ROS master node. Key configurations include: - -- **Image**: Uses the official ROS Noetic image. -- **Command**: Starts `roscore`. -- **Environment Variables**: Sets up ROS networking variables. -- **Networks**: Connects to the `ros` network. - -## Docker Compose Files - -The Docker Compose files orchestrate multiple services defined in the base service files, allowing for different execution scenarios. 
- -### `docker-compose.yaml` +### `docker-compose.leaderboard.yaml` - **Includes**: - - `linter_services.yaml` + - `docker-compose.linter.yaml` + - `docker-compose.carla-simulator.yaml` - `roscore_service.yaml` - - `carla-simulator_service.yaml` - **Services**: - Extends the `agent` service from `agent_service.yaml`. - **Purpose**: Runs the agent with special scenarios included. Solving these scenarios is the primary goal of the project. -### `docker-compose_dev.yaml` +### `docker-compose.devroute.yaml` - **Includes**: - - `linter_services.yaml` + - `docker-compose.linter.yaml` + - `docker-compose.carla-simulator.yaml` - `roscore_service.yaml` - - `carla-simulator_service.yaml` - **Services**: - Extends the `agent` service from `agent_service.yaml`. - **Environment Overrides**: @@ -108,11 +110,17 @@ The Docker Compose files orchestrate multiple services defined in the base servi - Runs the agent with simplified settings suitable for development and testing. - **Purpose**: Provides a minimal setup for development without special scenarios. -### `docker-compose_cicd.yaml` +### `docker-compose.dev.yaml` + +- **Services**: + - Defines an `agent-dev` service using the corresponding Dockerfile. +- **Purpose**: Provides a container for attaching a VS Code instance for development. + +### `docker-compose.cicd.yaml` - **Includes**: + - `docker-compose.carla-simulator.yaml` - `roscore_service.yaml` - - `carla-simulator_service.yaml` - **Services**: - Defines an `agent` service using a prebuilt image from the project's container registry. - **Dependencies**: @@ -137,34 +145,22 @@ Distributed execution separates the agent and the CARLA simulator onto different - Running large vision models that require extensive VRAM. - The single machine's resources are insufficient to handle both the agent and simulator. -**Note**: In distributed execution, the CARLA simulator must be running on a second desktop PC, and the `CARLA_SIM_HOST` environment variable should be set accordingly. 
+**Note**: In distributed execution, the CARLA simulator must be running on a second desktop PC, and the `CARLA_SIM_HOST` environment variable should be set accordingly. Further information can be found [here](../doc/02_development/14_distributed_simulation.md). ## Usage -To run the project using the provided Docker Compose files: - -- **Standard Execution with Special Scenarios**: - - ```bash - docker-compose -f build/docker-compose.yaml up - ``` - -- **Development Execution without Special Scenarios**: - - ```bash - docker-compose -f build/docker-compose_dev.yaml up - ``` +To run the project using the provided Docker Compose files, simply navigate to the desired file in the VS Code Explorer and select `Compose Up` after right-clicking it. - **CI/CD Execution**: - The `docker-compose_cicd.yaml` is intended to be used within CI/CD pipelines and may be invoked as part of automated scripts. + The `docker-compose.cicd.yaml` is intended to be used within CI/CD pipelines and may be invoked as part of automated scripts. ## Notes - Ensure that you have NVIDIA GPU support configured if running models that require GPU acceleration. - The `agent_service.yaml` and other base service files are crucial for defining the common configurations and should not be modified unless necessary. - When running in distributed mode, update the `CARLA_SIM_HOST` environment variable in the appropriate service configurations to point to the simulator's IP address. -- The linter services defined in `linter_services.yaml` can be used to maintain code quality and should be run regularly during development. +- The linter services defined in `docker-compose.linter.yaml` can be used to maintain code quality and should be run regularly during development. 
## Conclusion From daa6369c299429b68eb529a991717ae56a7d0216 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Wed, 9 Oct 2024 10:18:30 +0200 Subject: [PATCH 17/28] Fixed names of compose files --- .github/workflows/build.yml | 2 +- doc/02_development/14_distributed_simulation.md | 9 +++------ 2 files changed, 4 insertions(+), 7 deletions(-) diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index f7e00e1b..fa07e68f 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -76,7 +76,7 @@ jobs: if: github.event_name == 'pull_request' env: AGENT_VERSION: ${{ needs.build-and-push-image.outputs.version }} - COMPOSE_FILE: ./build/docker-compose_cicd.yaml + COMPOSE_FILE: ./build/docker-compose.cicd.yaml steps: - name: Checkout repository uses: actions/checkout@v3 diff --git a/doc/02_development/14_distributed_simulation.md b/doc/02_development/14_distributed_simulation.md index 80dc7692..2d7cbe58 100644 --- a/doc/02_development/14_distributed_simulation.md +++ b/doc/02_development/14_distributed_simulation.md @@ -45,15 +45,12 @@ Typically, the ip address is the first one in the list. Replace the ip-address in the following files: -- `docker-compose.distributed.yml` -- `docker-compose.dev.distributed.yml` +- `build/docker-compose.devroute-distributed.yaml` +- `build/docker-compose.leaderboard-distributed.yaml` ### Start the agent on your local machine -```bash -docker compose -f build/docker-compose_distributed.yaml up -docker compose -f build/docker-compose_dev_distributed.yaml up -``` +Navigate to the files mentioned above in the VS Code Explorer and select `Compose Up` after right-clicking one of the files. ## How do you know that you do not have enough compute resources? 
From 622ff16c638b82423592ceb92c3320840d9143b9 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Wed, 9 Oct 2024 14:06:38 +0200 Subject: [PATCH 18/28] Removed leading numbers from folders & files --- .flake8 | 4 +- README.md | 8 ++-- build/README.md | 2 +- code/acting/readme.md | 14 +++--- .../src/coordinate_transformation.py | 2 +- .../plots/data_26_MAE_Boxed.png | Bin .../plots/data_26_MSE_Boxed.png | Bin .../plots/data_26_RMSPE_Boxed.png | Bin .../Position_Heading_Datasets/viz.py | 2 +- code/perception/src/kalman_filter.py | 4 +- code/perception/src/lidar_distance.py | 2 +- .../src/position_heading_filter_debug_node.py | 8 ++-- .../src/position_heading_publisher_node.py | 2 +- .../src/traffic_light_detection/Readme.md | 2 +- .../behavior_agent/behaviours/maneuvers.py | 2 +- doc/03_research/01_acting/Readme.md | 11 ----- doc/03_research/02_perception/Readme.md | 12 ----- doc/03_research/04_requirements/Readme.md | 7 --- doc/03_research/Readme.md | 10 ---- doc/05_acting/Readme.md | 10 ---- doc/06_perception/Readme.md | 22 --------- doc/acting/Readme.md | 10 ++++ .../acting_testing.md} | 0 .../architecture_documentation.md} | 2 +- .../main_frame_publisher.md} | 2 +- .../steering_controllers.md} | 8 ++-- .../vehicle_controller.md} | 4 +- .../velocity_controller.md} | 8 ++-- doc/{00_assets => assets}/2_15_layover.png | Bin doc/{00_assets => assets}/2_layover.png | Bin doc/{00_assets => assets}/3_layover.png | Bin doc/{00_assets => assets}/3d_2d_formula.png | Bin .../3d_2d_projection.png | Bin doc/{00_assets => assets}/4_layover.png | Bin doc/{00_assets => assets}/Back_Detection.png | Bin doc/{00_assets => assets}/Comment_PR.png | Bin doc/{00_assets => assets}/Comment_viewed.png | Bin .../Commit_suggestion.png | Bin doc/{00_assets => assets}/Driving_SM.png | Bin doc/{00_assets => assets}/Files_Changed.png | Bin doc/{00_assets => assets}/Front_Detection.png | Bin doc/{00_assets => assets}/Global_Plan.png | Bin doc/{00_assets => assets}/Intersection_SM.png | Bin 
doc/{00_assets => assets}/Lane_Change_SM.png | Bin doc/{00_assets => assets}/Lanelets.png | Bin doc/{00_assets => assets}/Left_Detection.png | Bin doc/{00_assets => assets}/PR_overview.png | Bin .../Planning_Implementierung.png | Bin doc/{00_assets => assets}/Pycharm_PR.png | Bin .../Resolve_conversation.png | Bin doc/{00_assets => assets}/Review_changes.png | Bin doc/{00_assets => assets}/Right_Detection.png | Bin doc/{00_assets => assets}/Right_lane.png | Bin doc/{00_assets => assets}/Road0_cutout.png | Bin .../Stop_sign_OpenDrive.png | Bin doc/{00_assets => assets}/Suggestion.png | Bin doc/{00_assets => assets}/Super_SM.png | Bin doc/{00_assets => assets}/TR01.png | Bin doc/{00_assets => assets}/TR02.png | Bin doc/{00_assets => assets}/TR03.png | Bin doc/{00_assets => assets}/TR04.png | Bin doc/{00_assets => assets}/TR05.png | Bin doc/{00_assets => assets}/TR06.png | Bin doc/{00_assets => assets}/TR07.png | Bin doc/{00_assets => assets}/TR08.png | Bin doc/{00_assets => assets}/TR09.png | Bin doc/{00_assets => assets}/TR10.png | Bin doc/{00_assets => assets}/TR11.png | Bin doc/{00_assets => assets}/TR12.png | Bin doc/{00_assets => assets}/TR14.png | Bin doc/{00_assets => assets}/TR15.png | Bin doc/{00_assets => assets}/TR16.png | Bin doc/{00_assets => assets}/TR17.png | Bin doc/{00_assets => assets}/TR18.png | Bin doc/{00_assets => assets}/TR19.png | Bin doc/{00_assets => assets}/TR20.png | Bin doc/{00_assets => assets}/TR21.png | Bin doc/{00_assets => assets}/TR22.png | Bin doc/{00_assets => assets}/TR23.png | Bin doc/{00_assets => assets}/Traffic_SM.png | Bin .../acting/Architecture_Acting.png | Bin .../acting/Steering_PurePursuit.png | Bin .../acting/Steering_PurePursuit_Tuning.png | Bin .../acting/Steering_Stanley.png | Bin .../Steering_Stanley_ComparedToPurePur.png | Bin .../VelContr_PID_BrakingWithThrottlePID.png | Bin .../acting/VelContr_PID_StepResponse.png | Bin .../VelContr_PID_differentVelocities.png | Bin .../acting/emergency_brake_stats_graph.png | 
Bin .../acting/main_frame_publisher_bug.png | Bin doc/{00_assets => assets}/behaviour_tree.png | Bin .../berechnungsmodell.png | Bin doc/{00_assets => assets}/branch_overview.png | Bin doc/{00_assets => assets}/bug_template.png | Bin doc/{00_assets => assets}/create_issue.png | Bin .../distance_visualization.png | Bin .../efficientps_structure.png | Bin .../fahrzeugapproximation.png | Bin .../fahrzeugpositionsberechnung.png | Bin .../fahrzeugwinkelberechnung.png | Bin .../feature_template.png | Bin .../filter_img/avg_10_w_0_500.png | Bin .../filter_img/avg_10_w_0_750.png | Bin .../filter_img/avg_10_w_1_000.png | Bin .../filter_img/avg_1_w_0_500.png | Bin .../filter_img/avg_1_w_0_750.png | Bin .../filter_img/avg_1_w_1_000.png | Bin .../filter_img/avg_20_w_0_750.png | Bin .../filter_img/avg_7_w_0_500.png | Bin .../filter_img/avg_7_w_0_750.png | Bin .../filter_img/avg_7_w_1_000.png | Bin .../filter_img/rolling_avg_1.png | Bin .../filter_img/rolling_avg_10.png | Bin .../filter_img/rolling_avg_20.png | Bin .../filter_img/rolling_avg_5.png | Bin doc/{00_assets => assets}/gdrive-paf.png | Bin .../gdrive-permissions.png | Bin .../gewinnerteam19-architektur.png | Bin doc/{00_assets => assets}/git-flow.svg | 0 .../github-action-md.png | Bin .../github-action-py.png | Bin .../github_create_a_branch.png | Bin .../global_trajectory.png | Bin .../gnss_ohne_rolling_average.png | Bin .../implementation_plan_perception.jpg | Bin doc/{00_assets => assets}/intersection.png | Bin doc/{00_assets => assets}/intersection_2.png | Bin doc/{00_assets => assets}/issue_wizard.png | Bin doc/{00_assets => assets}/junction.png | Bin .../kollisionsberechnung.png | Bin doc/{00_assets => assets}/kreuzungszonen.png | Bin doc/{00_assets => assets}/lane_midpoint.png | Bin doc/{00_assets => assets}/leaderboard-1.png | Bin doc/{00_assets => assets}/leaderboard-2.png | Bin doc/{00_assets => assets}/legend_bt.png | Bin doc/{00_assets => assets}/lidar_filter.png | Bin .../lidarhinderniserkennung.png | Bin 
.../local_trajectory.png | Bin doc/{00_assets => assets}/multi_lane.png | Bin doc/{00_assets => assets}/nvcc_version.png | Bin doc/{00_assets => assets}/occupancygrid.png | Bin .../optimierungsvisualisierung.png | Bin .../overtaking_overview.png | Bin doc/{00_assets => assets}/overview.jpg | Bin .../adding_new_position_methods.png | Bin .../perception/data_26_MAE_Boxed.png | Bin .../perception/data_26_MSE_Boxed.png | Bin .../perception/kalman_installation_guide.png | Bin .../perception/modular_subscriber_example.png | Bin .../perception/new_heading_pub_example.png | Bin .../perception/non_linear_kalman_example.png | Bin .../perception/quat_to_angle.png | Bin .../perception/sensor_debug_change.png | Bin .../perception/sensor_debug_data_saving.png | Bin .../perception/sensor_debug_viz_config.png | Bin .../assets/planning \303\274bersicht.png" | Bin .../planning/BT_paper.png | Bin .../planning/BehaviorTree_medium.png | Bin .../planning/Globalplan.png | Bin .../planning/Overtake_car_trajectory.png | Bin .../planning/Planning.png | Bin .../planning/Planning_architecture.png | Bin .../planning/Planning_paf21.png | Bin .../planning/collision_check.png | Bin .../planning/intersection_scenario.png | Bin .../planning/localplan.png | Bin .../planning/overtaking_scenario.png | Bin .../planning/overview.jpg | Bin .../planning/overview.png | Bin .../planning/overview_paper1.png | Bin .../plot_full_trajectory_1_degree.png | Bin doc/{00_assets => assets}/planning/prios.png | Bin .../planning/simple_final_tree.png | Bin .../planning/test_frenet_results.png | Bin .../planning/three_scenarios.png | Bin .../planning/vector_calculation.png | Bin .../planning/vision_objects_filter_cc.png | Bin doc/{00_assets => assets}/positionsvektor.png | Bin .../preplanning_start.png | Bin .../pytree_PAF_status.drawio.png | Bin doc/{00_assets => assets}/reference.png | Bin doc/{00_assets => assets}/reference_xodr.png | Bin .../research_assets/bicyclegeometry.png | Bin 
.../research_assets/chattering.gif | Bin .../curve_detection_paf21_1.png | Bin .../danglingcarrotgeometry.png | Bin .../research_assets/messages_paf21_2.png | Bin .../research_assets/mpc.png | Bin .../research_assets/pure_pursuit.png | Bin .../standard_routine_paf21_2.png | Bin .../research_assets/stanley_controller.png | Bin .../research_assets/stanley_paf21_1.png | Bin .../research_assets/stanleyerror.png | Bin doc/{00_assets => assets}/road_option.png | Bin .../road_options_concept.png | Bin doc/{00_assets => assets}/roads_vis.png | Bin doc/{00_assets => assets}/segmentation.png | Bin doc/{00_assets => assets}/sensoranordnung.png | Bin doc/{00_assets => assets}/statemachines.png | Bin doc/{00_assets => assets}/top-level.png | Bin .../trajectory_roads.png | Bin .../trajekorienfehlermin.png | Bin .../trajektorienberechnung.png | Bin .../vulkan_device_not_available.png | Bin .../paf23/sprint_1.md | 0 .../paf23/sprint_2.md | 0 .../paf23/sprint_3.md | 0 .../paf23/sprint_4.md | 0 .../paf23/sprint_5.md | 0 .../paf23/sprint_6.md | 0 .../paf23/sprint_7.md | 0 .../paf24/mermaid_paf24.md | 0 .../paf24/student_roles24.md | 0 doc/{02_development => development}/Readme.md | 26 +++++------ .../build_action.md} | 0 .../coding_style.md} | 0 .../discord_webhook.md} | 0 .../distributed_simulation.md} | 0 .../documentation_requirements.md} | 6 +-- .../11_dvc.md => development/dvc.md} | 12 ++--- .../git_workflow.md} | 10 ++-- .../installing_cuda.md | 2 +- .../installing_python_packages.md} | 0 .../linter_action.md} | 8 ++-- .../02_linting.md => development/linting.md} | 0 .../project_management.md} | 12 ++--- .../review_guideline.md} | 18 +++---- .../templates/template_class.py | 0 .../templates/template_class_no_comments.py | 0 .../templates/template_component_readme.md | 0 .../templates/template_wiki_page.md | 0 .../templates/template_wiki_page_empty.md | 0 .../dvc_example/.gitignore | 0 .../dvc_example/dataset.dvc | 0 .../gps_example/gps_signal_example.md | 20 ++++---- 
 doc/{01_general => general}/Readme.md | 4 +-
 .../architecture.md} | 32 ++++++-------
 .../installation.md} | 2 +-
 doc/perception/Readme.md | 22 +++++++++
 .../coordinate_transformation.md} | 8 ++--
 .../dataset_generator.md} | 0
 .../dataset_structure.md} | 2 +-
 .../distance_to_objects.md} | 12 ++---
 .../efficientps.md} | 4 +-
 .../experiments/README.md | 0
 .../experiments/lanenet_evaluation/README.md | 0
 .../lanenet_evaluation/assets/1600_lanes.jpg | Bin
 .../assets/1600_lanes_mask.jpg | Bin
 .../lanenet_evaluation/assets/1619_lanes.jpg | Bin
 .../assets/1619_lanes_mask.jpg | Bin
 .../lanenet_evaluation/assets/1660_lanes.jpg | Bin
 .../assets/1660_lanes_mask.jpg | Bin
 .../lanenet_evaluation/assets/1663_lanes.jpg | Bin
 .../assets/1663_lanes_mask.jpg | Bin
 .../README.md | 0
 .../1619_PT_fasterrcnn_resnet50_fpn_v2.jpg | Bin
 .../asset-copies/1619_TF_faster-rcnn.jpg | Bin
 .../asset-copies/1619_yolo_nas_l.jpg | Bin
 .../asset-copies/1619_yolo_rtdetr_x.jpg | Bin
 .../asset-copies/1619_yolov8x.jpg | Bin
 .../asset-copies/1619_yolov8x_seg.jpg | Bin
 .../globals.py | 0
 .../object-detection-model_evaluation/pt.py | 0
 .../pylot.py | 0
 .../requirements.txt | 0
 .../object-detection-model_evaluation/yolo.py | 0
 .../README.md | 0
 .../assets/back_1.png | Bin
 .../assets/back_14.jpg | Bin
 .../assets/green_22.jpg | Bin
 .../assets/green_4.png | Bin
 .../assets/red_10.png | Bin
 .../assets/red_20.png | Bin
 .../assets/yellow_1.png | Bin
 .../assets/yellow_18.jpg | Bin
 .../kalman_filter.md} | 6 +--
 .../lidar_distance_utility.md} | 2 +-
 .../position_heading_filter_debug_node.md} | 18 +++----
 .../position_heading_publisher_node.md} | 12 ++---
 .../traffic_light_detection.md} | 0
 .../vision_node.md} | 24 +++++-----
 doc/{07_planning => planning}/ACC.md | 0
 .../Behavior_tree.md | 8 ++--
 .../Collision_Check.md | 0
 .../Global_Planner.md | 0
 .../Local_Planning.md | 14 +++---
 doc/{07_planning => planning}/Preplanning.md | 14 +++---
 doc/{07_planning => planning}/README.md | 6 +--
 .../Unstuck_Behavior.md | 4 +-
 .../motion_planning.md | 0
 .../01_py_trees.md => planning/py_trees.md} | 2 +-
 .../Leaderboard-2/changes_leaderboard2.md | 2 +-
 doc/research/Readme.md | 10 ++++
 doc/research/acting/Readme.md | 11 +++++
 .../acting/autoware_acting.md} | 0
 .../acting/basics_acting.md} | 10 ++--
 .../acting/implementation_acting.md} | 2 +-
 .../acting/paf21_1_acting.md} | 4 +-
 .../acting/paf21_2_and_pylot_acting.md} | 14 +++---
 .../perception}/LIDAR_data.md | 2 +-
 doc/research/perception/Readme.md | 12 +++++
 .../perception/Research_PAF21-Perception.md} | 0
 .../perception/autoware-perception.md} | 0
 .../perception/basics.md} | 0
 .../perception/first_implementation_plan.md} | 4 +-
 .../perception/paf_21_1_perception.md} | 0
 .../perception/pylot.md} | 0
 .../planning}/Readme.md | 4 +-
 .../planning/paf22/Implementation.md} | 12 ++---
 .../planning/paf22/Navigation_Data.md} | 0
 .../planning/paf22/OpenDrive.md} | 14 +++---
 .../planning/paf22/basics.md} | 30 ++++++------
 .../planning/paf22/decision_making.md} | 0
 .../paf22/reevaluation_desicion_making.md} | 0
 .../planning/paf22/state_machine_design.md} | 10 ++--
 .../Local_planning_for_first_milestone.md} | 12 ++---
 .../planning/paf23/PlannedArchitecture.md} | 8 ++--
 .../planning/paf23/Planning.md} | 4 +-
 .../planning/paf23/PlanningPaf22.md} | 2 +-
 .../paf23/Research_Pylot_Planning.md} | 0
 .../Testing_frenet_trajectory_planner.md | 2 +-
 .../planning/paf23/paf21-1.md} | 2 +-
 .../planning/paf23}/test_traj.py | 0
 doc/research/requirements/Readme.md | 7 +++
 .../informations_from_leaderboard.md} | 0
 .../requirements/requirements.md} | 2 +-
 .../requirements/use_cases.md} | 44 +++++++++---------
 327 files changed, 348 insertions(+), 344 deletions(-)
 rename code/perception/src/{00_Experiments => experiments}/Position_Heading_Datasets/plots/data_26_MAE_Boxed.png (100%)
 rename code/perception/src/{00_Experiments => experiments}/Position_Heading_Datasets/plots/data_26_MSE_Boxed.png (100%)
 rename code/perception/src/{00_Experiments =>
experiments}/Position_Heading_Datasets/plots/data_26_RMSPE_Boxed.png (100%) rename code/perception/src/{00_Experiments => experiments}/Position_Heading_Datasets/viz.py (99%) delete mode 100644 doc/03_research/01_acting/Readme.md delete mode 100644 doc/03_research/02_perception/Readme.md delete mode 100644 doc/03_research/04_requirements/Readme.md delete mode 100644 doc/03_research/Readme.md delete mode 100644 doc/05_acting/Readme.md delete mode 100644 doc/06_perception/Readme.md create mode 100644 doc/acting/Readme.md rename doc/{05_acting/05_acting_testing.md => acting/acting_testing.md} (100%) rename doc/{05_acting/01_architecture_documentation.md => acting/architecture_documentation.md} (98%) rename doc/{05_acting/06_main_frame_publisher.md => acting/main_frame_publisher.md} (94%) rename doc/{05_acting/03_steering_controllers.md => acting/steering_controllers.md} (93%) rename doc/{05_acting/04_vehicle_controller.md => acting/vehicle_controller.md} (95%) rename doc/{05_acting/02_velocity_controller.md => acting/velocity_controller.md} (91%) rename doc/{00_assets => assets}/2_15_layover.png (100%) rename doc/{00_assets => assets}/2_layover.png (100%) rename doc/{00_assets => assets}/3_layover.png (100%) rename doc/{00_assets => assets}/3d_2d_formula.png (100%) rename doc/{00_assets => assets}/3d_2d_projection.png (100%) rename doc/{00_assets => assets}/4_layover.png (100%) rename doc/{00_assets => assets}/Back_Detection.png (100%) rename doc/{00_assets => assets}/Comment_PR.png (100%) rename doc/{00_assets => assets}/Comment_viewed.png (100%) rename doc/{00_assets => assets}/Commit_suggestion.png (100%) rename doc/{00_assets => assets}/Driving_SM.png (100%) rename doc/{00_assets => assets}/Files_Changed.png (100%) rename doc/{00_assets => assets}/Front_Detection.png (100%) rename doc/{00_assets => assets}/Global_Plan.png (100%) rename doc/{00_assets => assets}/Intersection_SM.png (100%) rename doc/{00_assets => assets}/Lane_Change_SM.png (100%) rename 
doc/{00_assets => assets}/Lanelets.png (100%) rename doc/{00_assets => assets}/Left_Detection.png (100%) rename doc/{00_assets => assets}/PR_overview.png (100%) rename doc/{00_assets => assets}/Planning_Implementierung.png (100%) rename doc/{00_assets => assets}/Pycharm_PR.png (100%) rename doc/{00_assets => assets}/Resolve_conversation.png (100%) rename doc/{00_assets => assets}/Review_changes.png (100%) rename doc/{00_assets => assets}/Right_Detection.png (100%) rename doc/{00_assets => assets}/Right_lane.png (100%) rename doc/{00_assets => assets}/Road0_cutout.png (100%) rename doc/{00_assets => assets}/Stop_sign_OpenDrive.png (100%) rename doc/{00_assets => assets}/Suggestion.png (100%) rename doc/{00_assets => assets}/Super_SM.png (100%) rename doc/{00_assets => assets}/TR01.png (100%) rename doc/{00_assets => assets}/TR02.png (100%) rename doc/{00_assets => assets}/TR03.png (100%) rename doc/{00_assets => assets}/TR04.png (100%) rename doc/{00_assets => assets}/TR05.png (100%) rename doc/{00_assets => assets}/TR06.png (100%) rename doc/{00_assets => assets}/TR07.png (100%) rename doc/{00_assets => assets}/TR08.png (100%) rename doc/{00_assets => assets}/TR09.png (100%) rename doc/{00_assets => assets}/TR10.png (100%) rename doc/{00_assets => assets}/TR11.png (100%) rename doc/{00_assets => assets}/TR12.png (100%) rename doc/{00_assets => assets}/TR14.png (100%) rename doc/{00_assets => assets}/TR15.png (100%) rename doc/{00_assets => assets}/TR16.png (100%) rename doc/{00_assets => assets}/TR17.png (100%) rename doc/{00_assets => assets}/TR18.png (100%) rename doc/{00_assets => assets}/TR19.png (100%) rename doc/{00_assets => assets}/TR20.png (100%) rename doc/{00_assets => assets}/TR21.png (100%) rename doc/{00_assets => assets}/TR22.png (100%) rename doc/{00_assets => assets}/TR23.png (100%) rename doc/{00_assets => assets}/Traffic_SM.png (100%) rename doc/{00_assets => assets}/acting/Architecture_Acting.png (100%) rename doc/{00_assets => 
assets}/acting/Steering_PurePursuit.png (100%) rename doc/{00_assets => assets}/acting/Steering_PurePursuit_Tuning.png (100%) rename doc/{00_assets => assets}/acting/Steering_Stanley.png (100%) rename doc/{00_assets => assets}/acting/Steering_Stanley_ComparedToPurePur.png (100%) rename doc/{00_assets => assets}/acting/VelContr_PID_BrakingWithThrottlePID.png (100%) rename doc/{00_assets => assets}/acting/VelContr_PID_StepResponse.png (100%) rename doc/{00_assets => assets}/acting/VelContr_PID_differentVelocities.png (100%) rename doc/{00_assets => assets}/acting/emergency_brake_stats_graph.png (100%) rename doc/{00_assets => assets}/acting/main_frame_publisher_bug.png (100%) rename doc/{00_assets => assets}/behaviour_tree.png (100%) rename doc/{00_assets => assets}/berechnungsmodell.png (100%) rename doc/{00_assets => assets}/branch_overview.png (100%) rename doc/{00_assets => assets}/bug_template.png (100%) rename doc/{00_assets => assets}/create_issue.png (100%) rename doc/{00_assets => assets}/distance_visualization.png (100%) rename doc/{00_assets => assets}/efficientps_structure.png (100%) rename doc/{00_assets => assets}/fahrzeugapproximation.png (100%) rename doc/{00_assets => assets}/fahrzeugpositionsberechnung.png (100%) rename doc/{00_assets => assets}/fahrzeugwinkelberechnung.png (100%) rename doc/{00_assets => assets}/feature_template.png (100%) rename doc/{00_assets => assets}/filter_img/avg_10_w_0_500.png (100%) rename doc/{00_assets => assets}/filter_img/avg_10_w_0_750.png (100%) rename doc/{00_assets => assets}/filter_img/avg_10_w_1_000.png (100%) rename doc/{00_assets => assets}/filter_img/avg_1_w_0_500.png (100%) rename doc/{00_assets => assets}/filter_img/avg_1_w_0_750.png (100%) rename doc/{00_assets => assets}/filter_img/avg_1_w_1_000.png (100%) rename doc/{00_assets => assets}/filter_img/avg_20_w_0_750.png (100%) rename doc/{00_assets => assets}/filter_img/avg_7_w_0_500.png (100%) rename doc/{00_assets => assets}/filter_img/avg_7_w_0_750.png 
(100%) rename doc/{00_assets => assets}/filter_img/avg_7_w_1_000.png (100%) rename doc/{00_assets => assets}/filter_img/rolling_avg_1.png (100%) rename doc/{00_assets => assets}/filter_img/rolling_avg_10.png (100%) rename doc/{00_assets => assets}/filter_img/rolling_avg_20.png (100%) rename doc/{00_assets => assets}/filter_img/rolling_avg_5.png (100%) rename doc/{00_assets => assets}/gdrive-paf.png (100%) rename doc/{00_assets => assets}/gdrive-permissions.png (100%) rename doc/{00_assets => assets}/gewinnerteam19-architektur.png (100%) rename doc/{00_assets => assets}/git-flow.svg (100%) rename doc/{00_assets => assets}/github-action-md.png (100%) rename doc/{00_assets => assets}/github-action-py.png (100%) rename doc/{00_assets => assets}/github_create_a_branch.png (100%) rename doc/{00_assets => assets}/global_trajectory.png (100%) rename doc/{00_assets => assets}/gnss_ohne_rolling_average.png (100%) rename doc/{00_assets => assets}/implementation_plan_perception.jpg (100%) rename doc/{00_assets => assets}/intersection.png (100%) rename doc/{00_assets => assets}/intersection_2.png (100%) rename doc/{00_assets => assets}/issue_wizard.png (100%) rename doc/{00_assets => assets}/junction.png (100%) rename doc/{00_assets => assets}/kollisionsberechnung.png (100%) rename doc/{00_assets => assets}/kreuzungszonen.png (100%) rename doc/{00_assets => assets}/lane_midpoint.png (100%) rename doc/{00_assets => assets}/leaderboard-1.png (100%) rename doc/{00_assets => assets}/leaderboard-2.png (100%) rename doc/{00_assets => assets}/legend_bt.png (100%) rename doc/{00_assets => assets}/lidar_filter.png (100%) rename doc/{00_assets => assets}/lidarhinderniserkennung.png (100%) rename doc/{00_assets => assets}/local_trajectory.png (100%) rename doc/{00_assets => assets}/multi_lane.png (100%) rename doc/{00_assets => assets}/nvcc_version.png (100%) rename doc/{00_assets => assets}/occupancygrid.png (100%) rename doc/{00_assets => assets}/optimierungsvisualisierung.png (100%) 
rename doc/{00_assets => assets}/overtaking_overview.png (100%) rename doc/{00_assets => assets}/overview.jpg (100%) rename doc/{00_assets => assets}/perception/adding_new_position_methods.png (100%) rename doc/{00_assets => assets}/perception/data_26_MAE_Boxed.png (100%) rename doc/{00_assets => assets}/perception/data_26_MSE_Boxed.png (100%) rename doc/{00_assets => assets}/perception/kalman_installation_guide.png (100%) rename doc/{00_assets => assets}/perception/modular_subscriber_example.png (100%) rename doc/{00_assets => assets}/perception/new_heading_pub_example.png (100%) rename doc/{00_assets => assets}/perception/non_linear_kalman_example.png (100%) rename doc/{00_assets => assets}/perception/quat_to_angle.png (100%) rename doc/{00_assets => assets}/perception/sensor_debug_change.png (100%) rename doc/{00_assets => assets}/perception/sensor_debug_data_saving.png (100%) rename doc/{00_assets => assets}/perception/sensor_debug_viz_config.png (100%) rename "doc/00_assets/planning \303\274bersicht.png" => "doc/assets/planning \303\274bersicht.png" (100%) rename doc/{00_assets => assets}/planning/BT_paper.png (100%) rename doc/{00_assets => assets}/planning/BehaviorTree_medium.png (100%) rename doc/{00_assets => assets}/planning/Globalplan.png (100%) rename doc/{00_assets => assets}/planning/Overtake_car_trajectory.png (100%) rename doc/{00_assets => assets}/planning/Planning.png (100%) rename doc/{00_assets => assets}/planning/Planning_architecture.png (100%) rename doc/{00_assets => assets}/planning/Planning_paf21.png (100%) rename doc/{00_assets => assets}/planning/collision_check.png (100%) rename doc/{00_assets => assets}/planning/intersection_scenario.png (100%) rename doc/{00_assets => assets}/planning/localplan.png (100%) rename doc/{00_assets => assets}/planning/overtaking_scenario.png (100%) rename doc/{00_assets => assets}/planning/overview.jpg (100%) rename doc/{00_assets => assets}/planning/overview.png (100%) rename doc/{00_assets => 
assets}/planning/overview_paper1.png (100%) rename doc/{00_assets => assets}/planning/plot_full_trajectory_1_degree.png (100%) rename doc/{00_assets => assets}/planning/prios.png (100%) rename doc/{00_assets => assets}/planning/simple_final_tree.png (100%) rename doc/{00_assets => assets}/planning/test_frenet_results.png (100%) rename doc/{00_assets => assets}/planning/three_scenarios.png (100%) rename doc/{00_assets => assets}/planning/vector_calculation.png (100%) rename doc/{00_assets => assets}/planning/vision_objects_filter_cc.png (100%) rename doc/{00_assets => assets}/positionsvektor.png (100%) rename doc/{00_assets => assets}/preplanning_start.png (100%) rename doc/{00_assets => assets}/pytree_PAF_status.drawio.png (100%) rename doc/{00_assets => assets}/reference.png (100%) rename doc/{00_assets => assets}/reference_xodr.png (100%) rename doc/{00_assets => assets}/research_assets/bicyclegeometry.png (100%) rename doc/{00_assets => assets}/research_assets/chattering.gif (100%) rename doc/{00_assets => assets}/research_assets/curve_detection_paf21_1.png (100%) rename doc/{00_assets => assets}/research_assets/danglingcarrotgeometry.png (100%) rename doc/{00_assets => assets}/research_assets/messages_paf21_2.png (100%) rename doc/{00_assets => assets}/research_assets/mpc.png (100%) rename doc/{00_assets => assets}/research_assets/pure_pursuit.png (100%) rename doc/{00_assets => assets}/research_assets/standard_routine_paf21_2.png (100%) rename doc/{00_assets => assets}/research_assets/stanley_controller.png (100%) rename doc/{00_assets => assets}/research_assets/stanley_paf21_1.png (100%) rename doc/{00_assets => assets}/research_assets/stanleyerror.png (100%) rename doc/{00_assets => assets}/road_option.png (100%) rename doc/{00_assets => assets}/road_options_concept.png (100%) rename doc/{00_assets => assets}/roads_vis.png (100%) rename doc/{00_assets => assets}/segmentation.png (100%) rename doc/{00_assets => assets}/sensoranordnung.png (100%) rename 
doc/{00_assets => assets}/statemachines.png (100%) rename doc/{00_assets => assets}/top-level.png (100%) rename doc/{00_assets => assets}/trajectory_roads.png (100%) rename doc/{00_assets => assets}/trajekorienfehlermin.png (100%) rename doc/{00_assets => assets}/trajektorienberechnung.png (100%) rename doc/{00_assets => assets}/vulkan_device_not_available.png (100%) rename doc/{08_dev_talks => dev_talks}/paf23/sprint_1.md (100%) rename doc/{08_dev_talks => dev_talks}/paf23/sprint_2.md (100%) rename doc/{08_dev_talks => dev_talks}/paf23/sprint_3.md (100%) rename doc/{08_dev_talks => dev_talks}/paf23/sprint_4.md (100%) rename doc/{08_dev_talks => dev_talks}/paf23/sprint_5.md (100%) rename doc/{08_dev_talks => dev_talks}/paf23/sprint_6.md (100%) rename doc/{08_dev_talks => dev_talks}/paf23/sprint_7.md (100%) rename doc/{08_dev_talks => dev_talks}/paf24/mermaid_paf24.md (100%) rename doc/{08_dev_talks => dev_talks}/paf24/student_roles24.md (100%) rename doc/{02_development => development}/Readme.md (67%) rename doc/{02_development/10_build_action.md => development/build_action.md} (100%) rename doc/{02_development/04_coding_style.md => development/coding_style.md} (100%) rename doc/{02_development/12_discord_webhook.md => development/discord_webhook.md} (100%) rename doc/{02_development/14_distributed_simulation.md => development/distributed_simulation.md} (100%) rename doc/{02_development/13_documentation_requirements.md => development/documentation_requirements.md} (95%) rename doc/{02_development/11_dvc.md => development/dvc.md} (96%) rename doc/{02_development/05_git_workflow.md => development/git_workflow.md} (85%) rename doc/{02_development => development}/installing_cuda.md (97%) rename doc/{02_development/10_installing_python_packages.md => development/installing_python_packages.md} (100%) rename doc/{02_development/09_linter_action.md => development/linter_action.md} (93%) rename doc/{02_development/02_linting.md => development/linting.md} (100%) rename 
doc/{02_development/08_project_management.md => development/project_management.md} (92%) rename doc/{02_development/07_review_guideline.md => development/review_guideline.md} (92%) rename doc/{02_development => development}/templates/template_class.py (100%) rename doc/{02_development => development}/templates/template_class_no_comments.py (100%) rename doc/{02_development => development}/templates/template_component_readme.md (100%) rename doc/{02_development => development}/templates/template_wiki_page.md (100%) rename doc/{02_development => development}/templates/template_wiki_page_empty.md (100%) rename doc/{04_examples => examples}/dvc_example/.gitignore (100%) rename doc/{04_examples => examples}/dvc_example/dataset.dvc (100%) rename doc/{04_examples => examples}/gps_example/gps_signal_example.md (86%) rename doc/{01_general => general}/Readme.md (50%) rename doc/{01_general/04_architecture.md => general/architecture.md} (93%) rename doc/{01_general/02_installation.md => general/installation.md} (97%) create mode 100644 doc/perception/Readme.md rename doc/{06_perception/00_coordinate_transformation.md => perception/coordinate_transformation.md} (92%) rename doc/{06_perception/01_dataset_generator.md => perception/dataset_generator.md} (100%) rename doc/{06_perception/02_dataset_structure.md => perception/dataset_structure.md} (97%) rename doc/{06_perception/10_distance_to_objects.md => perception/distance_to_objects.md} (92%) rename doc/{06_perception/04_efficientps.md => perception/efficientps.md} (94%) rename doc/{06_perception => perception}/experiments/README.md (100%) rename doc/{06_perception => perception}/experiments/lanenet_evaluation/README.md (100%) rename doc/{06_perception => perception}/experiments/lanenet_evaluation/assets/1600_lanes.jpg (100%) rename doc/{06_perception => perception}/experiments/lanenet_evaluation/assets/1600_lanes_mask.jpg (100%) rename doc/{06_perception => perception}/experiments/lanenet_evaluation/assets/1619_lanes.jpg 
(100%) rename doc/{06_perception => perception}/experiments/lanenet_evaluation/assets/1619_lanes_mask.jpg (100%) rename doc/{06_perception => perception}/experiments/lanenet_evaluation/assets/1660_lanes.jpg (100%) rename doc/{06_perception => perception}/experiments/lanenet_evaluation/assets/1660_lanes_mask.jpg (100%) rename doc/{06_perception => perception}/experiments/lanenet_evaluation/assets/1663_lanes.jpg (100%) rename doc/{06_perception => perception}/experiments/lanenet_evaluation/assets/1663_lanes_mask.jpg (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/README.md (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/asset-copies/1619_PT_fasterrcnn_resnet50_fpn_v2.jpg (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/asset-copies/1619_TF_faster-rcnn.jpg (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_nas_l.jpg (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_rtdetr_x.jpg (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x.jpg (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x_seg.jpg (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/globals.py (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/pt.py (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/pylot.py (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/requirements.txt (100%) rename doc/{06_perception => perception}/experiments/object-detection-model_evaluation/yolo.py (100%) rename doc/{06_perception => 
perception}/experiments/traffic-light-detection_evaluation/README.md (100%) rename doc/{06_perception => perception}/experiments/traffic-light-detection_evaluation/assets/back_1.png (100%) rename doc/{06_perception => perception}/experiments/traffic-light-detection_evaluation/assets/back_14.jpg (100%) rename doc/{06_perception => perception}/experiments/traffic-light-detection_evaluation/assets/green_22.jpg (100%) rename doc/{06_perception => perception}/experiments/traffic-light-detection_evaluation/assets/green_4.png (100%) rename doc/{06_perception => perception}/experiments/traffic-light-detection_evaluation/assets/red_10.png (100%) rename doc/{06_perception => perception}/experiments/traffic-light-detection_evaluation/assets/red_20.png (100%) rename doc/{06_perception => perception}/experiments/traffic-light-detection_evaluation/assets/yellow_1.png (100%) rename doc/{06_perception => perception}/experiments/traffic-light-detection_evaluation/assets/yellow_18.jpg (100%) rename doc/{06_perception/08_kalman_filter.md => perception/kalman_filter.md} (97%) rename doc/{06_perception/03_lidar_distance_utility.md => perception/lidar_distance_utility.md} (98%) rename doc/{06_perception/07_position_heading_filter_debug_node.md => perception/position_heading_filter_debug_node.md} (92%) rename doc/{06_perception/09_position_heading_publisher_node.md => perception/position_heading_publisher_node.md} (89%) rename doc/{06_perception/11_traffic_light_detection.md => perception/traffic_light_detection.md} (100%) rename doc/{06_perception/06_vision_node.md => perception/vision_node.md} (75%) rename doc/{07_planning => planning}/ACC.md (100%) rename doc/{07_planning => planning}/Behavior_tree.md (96%) rename doc/{07_planning => planning}/Collision_Check.md (100%) rename doc/{07_planning => planning}/Global_Planner.md (100%) rename doc/{07_planning => planning}/Local_Planning.md (93%) rename doc/{07_planning => planning}/Preplanning.md (96%) rename doc/{07_planning => 
planning}/README.md (92%) rename doc/{07_planning => planning}/Unstuck_Behavior.md (91%) rename doc/{07_planning => planning}/motion_planning.md (100%) rename doc/{07_planning/01_py_trees.md => planning/py_trees.md} (97%) rename doc/{03_research => research}/Leaderboard-2/changes_leaderboard2.md (96%) create mode 100644 doc/research/Readme.md create mode 100644 doc/research/acting/Readme.md rename doc/{03_research/01_acting/05_autoware_acting.md => research/acting/autoware_acting.md} (100%) rename doc/{03_research/01_acting/01_basics_acting.md => research/acting/basics_acting.md} (96%) rename doc/{03_research/01_acting/02_implementation_acting.md => research/acting/implementation_acting.md} (96%) rename doc/{03_research/01_acting/03_paf21_1_acting.md => research/acting/paf21_1_acting.md} (87%) rename doc/{03_research/01_acting/04_paf21_2_and_pylot_acting.md => research/acting/paf21_2_and_pylot_acting.md} (97%) rename doc/{03_research/02_perception => research/perception}/LIDAR_data.md (96%) create mode 100644 doc/research/perception/Readme.md rename doc/{03_research/02_perception/05_Research_PAF21-Perception.md => research/perception/Research_PAF21-Perception.md} (100%) rename doc/{03_research/02_perception/05-autoware-perception.md => research/perception/autoware-perception.md} (100%) rename doc/{03_research/02_perception/02_basics.md => research/perception/basics.md} (100%) rename doc/{03_research/02_perception/03_first_implementation_plan.md => research/perception/first_implementation_plan.md} (97%) rename doc/{03_research/02_perception/06_paf_21_1_perception.md => research/perception/paf_21_1_perception.md} (100%) rename doc/{03_research/02_perception/04_pylot.md => research/perception/pylot.md} (100%) rename doc/{03_research/03_planning => research/planning}/Readme.md (81%) rename doc/{03_research/03_planning/00_paf22/03_Implementation.md => research/planning/paf22/Implementation.md} (94%) rename doc/{03_research/03_planning/00_paf22/05_Navigation_Data.md => 
research/planning/paf22/Navigation_Data.md} (100%) rename doc/{03_research/03_planning/00_paf22/07_OpenDrive.md => research/planning/paf22/OpenDrive.md} (97%) rename doc/{03_research/03_planning/00_paf22/02_basics.md => research/planning/paf22/basics.md} (93%) rename doc/{03_research/03_planning/00_paf22/04_decision_making.md => research/planning/paf22/decision_making.md} (100%) rename doc/{03_research/03_planning/00_paf22/07_reevaluation_desicion_making.md => research/planning/paf22/reevaluation_desicion_making.md} (100%) rename doc/{03_research/03_planning/00_paf22/06_state_machine_design.md => research/planning/paf22/state_machine_design.md} (97%) rename doc/{03_research/03_planning/00_paf23/04_Local_planning_for_first_milestone.md => research/planning/paf23/Local_planning_for_first_milestone.md} (82%) rename doc/{03_research/03_planning/00_paf23/03_PlannedArchitecture.md => research/planning/paf23/PlannedArchitecture.md} (91%) rename doc/{03_research/03_planning/00_paf23/01_Planning.md => research/planning/paf23/Planning.md} (95%) rename doc/{03_research/03_planning/00_paf23/02_PlanningPaf22.md => research/planning/paf23/PlanningPaf22.md} (92%) rename doc/{03_research/03_planning/00_paf23/09_Research_Pylot_Planning.md => research/planning/paf23/Research_Pylot_Planning.md} (100%) rename doc/{03_research/03_planning/00_paf23 => research/planning/paf23}/Testing_frenet_trajectory_planner.md (97%) rename doc/{03_research/03_planning/00_paf23/08_paf21-1.md => research/planning/paf23/paf21-1.md} (93%) rename doc/{03_research/03_planning/00_paf23 => research/planning/paf23}/test_traj.py (100%) create mode 100644 doc/research/requirements/Readme.md rename doc/{03_research/04_requirements/02_informations_from_leaderboard.md => research/requirements/informations_from_leaderboard.md} (100%) rename doc/{03_research/04_requirements/03_requirements.md => research/requirements/requirements.md} (89%) rename doc/{03_research/04_requirements/04_use_cases.md => 
research/requirements/use_cases.md} (97%) diff --git a/.flake8 b/.flake8 index 6cce5018..042f2345 100644 --- a/.flake8 +++ b/.flake8 @@ -3,5 +3,5 @@ exclude= code/planning/src/behavior_agent/behavior_tree.py, code/planning/src/behavior_agent/behaviours/__init__.py, code/planning/src/behavior_agent/behaviours, code/planning/__init__.py, - doc/02_development/templates/template_class_no_comments.py, - doc/02_development/templates/template_class.py \ No newline at end of file + doc/development/templates/template_class_no_comments.py, + doc/development/templates/template_class.py \ No newline at end of file diff --git a/README.md b/README.md index 48c48f93..03c77c38 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ # Praktikum Autonomes Fahren - PAF -This repository contains the source code for the "Praktikum Autonomes Fahren" at the Chair of Mechatronics from the University of Augsburg in the winter semester of 2023/2024. +This repository contains the source code for the "Praktikum Autonomes Fahren" at the Chair of Mechatronics from the University of Augsburg. The goal of the project is to develop a self-driving car that can navigate through a simulated environment. The project is based on the [CARLA simulator](https://carla.org/) and uses the [ROS](https://www.ros.org/) framework for communication between the different components. In the future, the project aims to contribute to the [CARLA Autonomous Driving Challenge](https://leaderboard.carla.org/challenge/). @@ -27,12 +27,12 @@ To run the project you have to install [docker](https://docs.docker.com/engine/i [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). `docker` and `nvidia-docker` are used to run the project in a containerized environment with GPU support. -More detailed instructions about setup and execution can be found [here](./doc/01_general/Readme.md). 
+More detailed instructions about setup and execution can be found [here](./doc/general/Readme.md). ## Development -If you contribute to this project please read the guidelines first. They can be found [here](./doc/02_development/Readme.md). +If you contribute to this project please read the guidelines first. They can be found [here](./doc/development/Readme.md). ## Research -The research on existing projects we did can be found [here](./doc/03_research/Readme.md). +The research on existing projects we did can be found [here](./doc/research/Readme.md). diff --git a/build/README.md b/build/README.md index 6a4e7861..1f3ddeca 100644 --- a/build/README.md +++ b/build/README.md @@ -145,7 +145,7 @@ Distributed execution separates the agent and the CARLA simulator onto different - Running large vision models that require extensive VRAM. - The single machine's resources are insufficient to handle both the agent and simulator. -**Note**: In distributed execution, the CARLA simulator must be running on a second desktop PC, and the `CARLA_SIM_HOST` environment variable should be set accordingly. Further information can be found in [here](../doc/02_development/14_distributed_simulation.md). +**Note**: In distributed execution, the CARLA simulator must be running on a second desktop PC, and the `CARLA_SIM_HOST` environment variable should be set accordingly. Further information can be found in [here](../doc/development/distributed_simulation.md). ## Usage diff --git a/code/acting/readme.md b/code/acting/readme.md index abc85d83..39c73fbc 100644 --- a/code/acting/readme.md +++ b/code/acting/readme.md @@ -28,22 +28,22 @@ Alexander Hellmann ## Acting Documentation -In order to further understand the general idea of the taken approach to the acting component please refer to the documentation of the [research](../../doc/03_research/01_acting/Readme.md) done and see the planned [general definition](../../doc/01_general/04_architecture.md#acting). 
+In order to further understand the general idea of the taken approach to the acting component please refer to the documentation of the [research](../../doc/research/acting/Readme.md) done and see the planned [general definition](../../doc/general/architecture.md#acting). -It is also highly recommended to go through the indepth [Acting-Documentation](../../doc/05_acting/Readme.md)! +It is also highly recommended to go through the indepth [Acting-Documentation](../../doc/acting/Readme.md)! ## Test/Debug/Tune Acting-Components The Acting_Debug_Node can be used as a simulated Planning package, publishing adjustable target velocities, steerings and trajectories as needed. -For more information about this node and how to use it, please read the [documentation](../../doc/05_acting/05_acting_testing.md). +For more information about this node and how to use it, please read the [documentation](../../doc/acting/acting_testing.md). You can also find more information in the commented [code](./src/acting/Acting_Debug_Node.py). ## Longitudinal controllers (Velocity Controller) The longitudinal controller is implemented as a PID velocity controller. -For more information about this controller, either read the [documentation](../../doc/05_acting/02_velocity_controller.md) or go through the commented [code](./src/acting/velocity_controller.py). +For more information about this controller, either read the [documentation](../../doc/acting/velocity_controller.md) or go through the commented [code](./src/acting/velocity_controller.py). 
## Lateral controllers (Steering Controllers) @@ -52,7 +52,7 @@ There are two steering controllers currently implemented, both providing live te - Pure Persuit Controller (paf/hero/pure_p_debug) - Stanley Controller (paf/hero/stanley_debug) -For further information about the steering controllers, either read the [documentation](./../../doc/05_acting/03_steering_controllers.md) or go through the commented code of [stanley_controller](./src/acting/stanley_controller.py) or [purepursuit_controller](./src/acting/pure_pursuit_controller.py). +For further information about the steering controllers, either read the [documentation](./../../doc/acting/steering_controllers.md) or go through the commented code of [stanley_controller](./src/acting/stanley_controller.py) or [purepursuit_controller](./src/acting/pure_pursuit_controller.py). ## Vehicle controller @@ -60,8 +60,8 @@ The VehicleController collects all necessary msgs from the other controllers and It also executes emergency-brakes and the unstuck-routine, if detected. -For more information about this controller, either read the [documentation](../../doc/05_acting/04_vehicle_controller.md) or go through the commented [code](./src/acting/vehicle_controller.py). +For more information about this controller, either read the [documentation](../../doc/acting/vehicle_controller.md) or go through the commented [code](./src/acting/vehicle_controller.py). 
## Visualization of the HeroFrame in rviz -For information about vizualizing the upcomming path in rviz see [Main frame publisher](../../doc/05_acting/06_main_frame_publisher.md) +For information about visualizing the upcoming path in rviz, see [Main frame publisher](../../doc/acting/main_frame_publisher.md) diff --git a/code/perception/src/coordinate_transformation.py b/code/perception/src/coordinate_transformation.py index fd570e0a..4f062770 100755 --- a/code/perception/src/coordinate_transformation.py +++ b/code/perception/src/coordinate_transformation.py @@ -118,7 +118,7 @@ def ecef_to_enu(x, y, z, lat0, lon0, h0): def quat_to_heading(quaternion): """ Converts a quaternion to a heading of the car in radians - (see ../../doc/06_perception/00_coordinate_transformation.md) + (see ../../doc/perception/coordinate_transformation.md) :param quaternion: quaternion of the car as a list [q.x, q.y, q.z, q.w] where q is the quaternion :return: heading of the car in radians (float) diff --git a/code/perception/src/00_Experiments/Position_Heading_Datasets/plots/data_26_MAE_Boxed.png b/code/perception/src/experiments/Position_Heading_Datasets/plots/data_26_MAE_Boxed.png similarity index 100% rename from code/perception/src/00_Experiments/Position_Heading_Datasets/plots/data_26_MAE_Boxed.png rename to code/perception/src/experiments/Position_Heading_Datasets/plots/data_26_MAE_Boxed.png diff --git a/code/perception/src/00_Experiments/Position_Heading_Datasets/plots/data_26_MSE_Boxed.png b/code/perception/src/experiments/Position_Heading_Datasets/plots/data_26_MSE_Boxed.png similarity index 100% rename from code/perception/src/00_Experiments/Position_Heading_Datasets/plots/data_26_MSE_Boxed.png rename to code/perception/src/experiments/Position_Heading_Datasets/plots/data_26_MSE_Boxed.png diff --git a/code/perception/src/00_Experiments/Position_Heading_Datasets/plots/data_26_RMSPE_Boxed.png b/code/perception/src/experiments/Position_Heading_Datasets/plots/data_26_RMSPE_Boxed.png
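As an editorial aside to the `quat_to_heading` docstring touched above: the conversion from a quaternion to a yaw (heading) angle follows the standard formula. A minimal illustrative sketch, assuming the `[q.x, q.y, q.z, q.w]` ordering described in the docstring (this is not a copy of the node's actual implementation):

```python
import math


def quat_to_heading(quaternion):
    """Extract the yaw (heading) angle in radians from a quaternion.

    Illustrative sketch of the standard yaw formula; assumes the
    [x, y, z, w] ordering described in the docstring above.
    """
    x, y, z, w = quaternion
    # yaw = atan2(2(wz + xy), 1 - 2(y^2 + z^2))
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
```

For an identity quaternion `[0, 0, 0, 1]` the heading is 0; a pure rotation of 90° about the z-axis yields pi/2.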
similarity index 100% rename from code/perception/src/00_Experiments/Position_Heading_Datasets/plots/data_26_RMSPE_Boxed.png rename to code/perception/src/experiments/Position_Heading_Datasets/plots/data_26_RMSPE_Boxed.png diff --git a/code/perception/src/00_Experiments/Position_Heading_Datasets/viz.py b/code/perception/src/experiments/Position_Heading_Datasets/viz.py similarity index 99% rename from code/perception/src/00_Experiments/Position_Heading_Datasets/viz.py rename to code/perception/src/experiments/Position_Heading_Datasets/viz.py index 979ff345..bee260b8 100644 --- a/code/perception/src/00_Experiments/Position_Heading_Datasets/viz.py +++ b/code/perception/src/experiments/Position_Heading_Datasets/viz.py @@ -4,7 +4,7 @@ """ The documentation on how to use this file can be found in -docs/perception/07_perception_heading_filter_debug_node.md +docs/perception/perception_heading_filter_debug_node.md since it is used to visualize the data of the heading filter debug node. """ diff --git a/code/perception/src/kalman_filter.py b/code/perception/src/kalman_filter.py index 6fa537cf..6a44bc22 100755 --- a/code/perception/src/kalman_filter.py +++ b/code/perception/src/kalman_filter.py @@ -18,7 +18,7 @@ ''' For more information see the documentation in: -../../doc/06_perception/08_kalman_filter.md +../../doc/perception/kalman_filter.md This class implements a Kalman filter for a 3D object tracked in 2D space. It implements the data of the IMU and the GPS Sensors. @@ -75,7 +75,7 @@ class KalmanFilter(CompatibleNode): This class implements a Kalman filter for the Heading and Position of the car. 
For more information see the documentation in: - ../../doc/06_perception/08_kalman_filter.md + ../../doc/perception/kalman_filter.md """ def __init__(self): """ diff --git a/code/perception/src/lidar_distance.py b/code/perception/src/lidar_distance.py index 16fdc4ac..f5f7c964 100755 --- a/code/perception/src/lidar_distance.py +++ b/code/perception/src/lidar_distance.py @@ -12,7 +12,7 @@ class LidarDistance(): - """ See doc/06_perception/03_lidar_distance_utility.md on + """ See doc/perception/lidar_distance_utility.md on how to configute this node """ diff --git a/code/perception/src/position_heading_filter_debug_node.py b/code/perception/src/position_heading_filter_debug_node.py index f43763e1..25d4c901 100755 --- a/code/perception/src/position_heading_filter_debug_node.py +++ b/code/perception/src/position_heading_filter_debug_node.py @@ -189,7 +189,7 @@ def save_position_data(self): """ This method saves the current location errors in a csv file. in the folders of - paf/doc/06_perception/00_Experiments/kalman_datasets + paf/doc/perception/experiments/kalman_datasets It does this for a limited amount of time. """ # stop saving data when max is reached @@ -199,7 +199,7 @@ def save_position_data(self): # Specify the path to the folder where you want to save the data base_path = ('/workspace/code/perception/' - 'src/00_Experiments/' + FOLDER_PATH) + 'src/experiments/' + FOLDER_PATH) folder_path_x = base_path + '/x_error' folder_path_y = base_path + '/y_error' # Ensure the directories exist @@ -222,7 +222,7 @@ def save_heading_data(self): """ This method saves the current heading errors in a csv file. in the folders of - paf/doc/06_perception/00_Experiments/kalman_datasets + paf/doc/perception/experiments/kalman_datasets It does this for a limited amount of time. 
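The Kalman filter referenced in `doc/perception/kalman_filter.md` fuses IMU and GPS data for position and heading. As a rough illustration only (the actual node tracks a larger 2D state vector, not this scalar case), a minimal predict/update cycle looks like:

```python
class Kalman1D:
    """Minimal scalar Kalman filter: static state, noisy measurements.

    Illustrative sketch only -- not the perception node's state model.
    """

    def __init__(self, x0, p0, q, r):
        self.x = x0  # state estimate
        self.p = p0  # estimate variance
        self.q = q   # process noise variance
        self.r = r   # measurement noise variance

    def predict(self):
        # State is assumed static, so only the uncertainty grows.
        self.p += self.q

    def update(self, z):
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct estimate toward measurement
        self.p *= (1.0 - k)             # shrink uncertainty
        return self.x
```

Feeding it noisy readings around a true value pulls the estimate toward that value while the variance shrinks.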
""" # if rospy.get_time() > 45 stop saving data: @@ -232,7 +232,7 @@ def save_heading_data(self): # Specify the path to the folder where you want to save the data base_path = ('/workspace/code/perception/' - 'src/00_Experiments' + FOLDER_PATH) + 'src/experiments' + FOLDER_PATH) folder_path_heading = base_path + '/heading_error' # Ensure the directories exist diff --git a/code/perception/src/position_heading_publisher_node.py b/code/perception/src/position_heading_publisher_node.py index 832121a8..f5b62e72 100755 --- a/code/perception/src/position_heading_publisher_node.py +++ b/code/perception/src/position_heading_publisher_node.py @@ -36,7 +36,7 @@ class PositionHeadingPublisherNode(CompatibleNode): must be added in the constructor for clean modular programming! For more information: - ../../doc/06_perception/09_position_heading_publisher_node.md + ../../doc/perception/position_heading_publisher_node.md """ def __init__(self): diff --git a/code/perception/src/traffic_light_detection/Readme.md b/code/perception/src/traffic_light_detection/Readme.md index a9641884..efea630c 100644 --- a/code/perception/src/traffic_light_detection/Readme.md +++ b/code/perception/src/traffic_light_detection/Readme.md @@ -1,7 +1,7 @@ # Traffic light detection The training is configured as DVC experiment. -More details about dvc experiments can be found [here](../../../../doc/02_development/11_dvc.md). +More details about dvc experiments can be found [here](../../../../doc/development/dvc.md). 
## Training diff --git a/code/planning/src/behavior_agent/behaviours/maneuvers.py b/code/planning/src/behavior_agent/behaviours/maneuvers.py index 64e3ab85..2a37f7a1 100755 --- a/code/planning/src/behavior_agent/behaviours/maneuvers.py +++ b/code/planning/src/behavior_agent/behaviours/maneuvers.py @@ -405,7 +405,7 @@ class UnstuckRoutine(py_trees.behaviour.Behaviour): """ Documentation to this behavior can be found in - /doc/07_planning/Behavior_detailed.md + /doc/planning/Behavior_detailed.md This behavior is triggered when the vehicle is stuck and needs to be unstuck. The behavior will then try to reverse and steer to the left or diff --git a/doc/03_research/01_acting/Readme.md b/doc/03_research/01_acting/Readme.md deleted file mode 100644 index 5bc58da5..00000000 --- a/doc/03_research/01_acting/Readme.md +++ /dev/null @@ -1,11 +0,0 @@ -# Acting - -This folder contains all the results of our research on acting: - -- **PAF22** -- [Basics](./01_basics_acting.md) -- [Implementation](./02_implementation_acting.md) -- **PAF23** -- [PAF21_1 Acting](./03_paf21_1_acting.md) -- [PAF21_2 Acting & Pylot Control](./04_paf21_2_and_pylot_acting.md) -- [Autoware Control](./05_autoware_acting.md) diff --git a/doc/03_research/02_perception/Readme.md b/doc/03_research/02_perception/Readme.md deleted file mode 100644 index 170fe63f..00000000 --- a/doc/03_research/02_perception/Readme.md +++ /dev/null @@ -1,12 +0,0 @@ -# Perception - -This folder contains all the results of research on perception: - -- **PAF22** - - [Basics](./02_basics.md) - - [First implementation plan](./03_first_implementation_plan.md) -- **PAF23** - - [Pylot Perception](./04_pylot.md) - - [PAF_21_2 Perception](./05_Research_PAF21-Perception.md) - - [PAF_21_1_Perception](./06_paf_21_1_perception.md) -- [Autoware Perception](./05-autoware-perception.md) diff --git a/doc/03_research/04_requirements/Readme.md b/doc/03_research/04_requirements/Readme.md deleted file mode 100644 index a2f40164..00000000 --- 
a/doc/03_research/04_requirements/Readme.md +++ /dev/null @@ -1,7 +0,0 @@ -# Requirements - -This folder contains all the results of our research on requirements: - -- [Leaderboard information](./02_informations_from_leaderboard.md) -- [Reqirements for agent](./03_requirements.md) -- [Use case scenarios](./04_use_cases.md) diff --git a/doc/03_research/Readme.md b/doc/03_research/Readme.md deleted file mode 100644 index 04591a65..00000000 --- a/doc/03_research/Readme.md +++ /dev/null @@ -1,10 +0,0 @@ -# Research - -This folder contains every research we did before we started the project. - -The research is structured in the following folders: - -- [Acting](./01_acting/Readme.md) -- [Perception](./02_perception/Readme.md) -- [Planning](./03_planning/Readme.md) -- [Requirements](./04_requirements/Readme.md) diff --git a/doc/05_acting/Readme.md b/doc/05_acting/Readme.md deleted file mode 100644 index d84fdf21..00000000 --- a/doc/05_acting/Readme.md +++ /dev/null @@ -1,10 +0,0 @@ -# Documentation of acting component - -This folder contains the documentation of the acting component. - -1. [Architecture](./01_architecture_documentation.md) -2. [Overview of the Velocity Controller](./02_velocity_controller.md) -3. [Overview of the Steering Controllers](./03_steering_controllers.md) -4. [Overview of the Vehicle Controller Component](./04_vehicle_controller.md) -5. [How to test/tune acting components independedly](./05_acting_testing.md) -6. [Main frame publisher](./06_mainframe_publisher.md) diff --git a/doc/06_perception/Readme.md b/doc/06_perception/Readme.md deleted file mode 100644 index 56f169a4..00000000 --- a/doc/06_perception/Readme.md +++ /dev/null @@ -1,22 +0,0 @@ -# Documentation of perception component - -This folder contains further documentation of the perception components. - -1. 
[Vision Node](./06_vision_node.md) - - The Visison Node provides an adaptive interface that is able to perform object-detection and/or image-segmentation on multiple cameras at the same time. -2. [Position Heading Filter Debug Node](./07_position_heading_filter_debug_node.md) -3. [Kalman Filter](./08_kalman_filter.md) -4. [Position Heading Publisher Node](./09_position_heading_publisher_node.md) -5. [Distance to Objects](./10_distance_to_objects.md) -6. [Traffic Light Detection](./11_traffic_light_detection.md) -7. [Coordinate Transformation (helper functions)](./00_coordinate_transformation.md) -8. [Dataset Generator](./01_dataset_generator.md) -9. [Dataset Structure](./02_dataset_structure.md) -10. [Lidar Distance Utility](./03_lidar_distance_utility.md) - 1. not used since paf22 -11. [Efficient PS](./04_efficientps.md) - 1. not used scince paf22 and never successfully tested - -## Experiments - -- The overview of performance evaluations is located in the [experiments](./experiments/README.md) folder. diff --git a/doc/acting/Readme.md b/doc/acting/Readme.md new file mode 100644 index 00000000..f55e66eb --- /dev/null +++ b/doc/acting/Readme.md @@ -0,0 +1,10 @@ +# Documentation of acting component + +This folder contains the documentation of the acting component. + +1. [Architecture](./architecture_documentation.md) +2. [Overview of the Velocity Controller](./velocity_controller.md) +3. [Overview of the Steering Controllers](./steering_controllers.md) +4. [Overview of the Vehicle Controller Component](./vehicle_controller.md) +5. [How to test/tune acting components independently](./acting_testing.md) +6.
[Main frame publisher](./mainframe_publisher.md) diff --git a/doc/05_acting/05_acting_testing.md b/doc/acting/acting_testing.md similarity index 100% rename from doc/05_acting/05_acting_testing.md rename to doc/acting/acting_testing.md diff --git a/doc/05_acting/01_architecture_documentation.md b/doc/acting/architecture_documentation.md similarity index 98% rename from doc/05_acting/01_architecture_documentation.md rename to doc/acting/architecture_documentation.md index 33031b0a..6eefb3cd 100644 --- a/doc/05_acting/01_architecture_documentation.md +++ b/doc/acting/architecture_documentation.md @@ -26,7 +26,7 @@ Alexander Hellmann ## Acting Architecture -![MISSING: Acting-ARCHITECTURE](../00_assets/acting/Architecture_Acting.png) +![MISSING: Acting-ARCHITECTURE](../assets/acting/Architecture_Acting.png) ## Summary of Acting Components diff --git a/doc/05_acting/06_main_frame_publisher.md b/doc/acting/main_frame_publisher.md similarity index 94% rename from doc/05_acting/06_main_frame_publisher.md rename to doc/acting/main_frame_publisher.md index 5999eda3..0a12bfc6 100644 --- a/doc/05_acting/06_main_frame_publisher.md +++ b/doc/acting/main_frame_publisher.md @@ -38,4 +38,4 @@ There are issues if the vehicle drives upwards or downwards. In this case the path will start to rise above the street (see picture) or start to move bellow the street. You can counteract this by changing the z offset of the path in rviz. 
-![main frame publisher bug](./../00_assets/acting/main_frame_publisher_bug.png) +![main frame publisher bug](./../assets/acting/main_frame_publisher_bug.png) diff --git a/doc/05_acting/03_steering_controllers.md b/doc/acting/steering_controllers.md similarity index 93% rename from doc/05_acting/03_steering_controllers.md rename to doc/acting/steering_controllers.md index 46101b9e..a15a3c02 100644 --- a/doc/05_acting/03_steering_controllers.md +++ b/doc/acting/steering_controllers.md @@ -35,7 +35,7 @@ For more indepth information about the PurePursuit Controller, click [this link] At every moment it checks a point of the trajectory in front of the vehicle with a distance of **$d_{la}$** and determines a steering-angle so that the vehicle will aim straight to this point of the trajectory. -![MISSING: PurePursuit-ShowImage](../00_assets/acting/Steering_PurePursuit.png) +![MISSING: PurePursuit-ShowImage](../assets/acting/Steering_PurePursuit.png) This **look-ahead-distance $d_{la}$** is velocity-dependent, as at higher velocities, the controller should look further ahead onto the trajectory. @@ -46,7 +46,7 @@ $$ \delta = arctan({2 \cdot L_{vehicle} \cdot sin(\alpha) \over d_{la}})$$ To tune the PurePursuit Controller, you can tune the factor of this velocity-dependence **$k_{ld}$**. Also, for an unknown reason, we needed to add an amplification to the output-steering signal before publishing aswell **$k_{pub}$**, which highly optimized the steering performance in the dev-launch: -![MISSING: PurePursuit-Optimization_Image](../00_assets/acting/Steering_PurePursuit_Tuning.png) +![MISSING: PurePursuit-Optimization_Image](../assets/acting/Steering_PurePursuit_Tuning.png) **NOTE:** The **look-ahead-distance $d_{la}$** should be highly optimally tuned already for optimal sensor data and on the dev-launch! In the Leaderboard-Launch this sadly does not work the same, so it requires different tuning and needs to be optimized/fixed. 
@@ -56,7 +56,7 @@ In the Leaderboard-Launch this sadly does not work the same, so it requires diff The [Stanley Controller's](../../code/acting/src/acting/stanley.py) main features to determine a steering-output is the so-called **cross-track-error** (e_fa in Image) and the **trajectory-heading** (theta_e in Image). For more indepth information about the Stanley Controller, click [this link](https://medium.com/roboquest/understanding-geometric-path-tracking-algorithms-stanley-controller-25da17bcc219) and [this link](https://ai.stanford.edu/~gabeh/papers/hoffmann_stanley_control07.pdf). -![MISSING: Stanley-SHOW-IMAGE](../00_assets/acting/Steering_Stanley.png) +![MISSING: Stanley-SHOW-IMAGE](../assets/acting/Steering_Stanley.png) At every moment it checks the closest point of the trajectory to itself and determines a two steering-angles: @@ -69,7 +69,7 @@ $$ \delta = \theta_e - arctan({k_{ce} \cdot e_{fa} \over v})$$ To tune the Stanley Controller, you tune the factor **$k_{ce}$**, which amplifies (or diminishes) how strong the **cross-track-error**-calculated steering-angle will "flow" into the output steering-angle. -![MISSING: Stanley-Compared to PurePursuit](../00_assets/acting/Steering_Stanley_ComparedToPurePur.png) +![MISSING: Stanley-Compared to PurePursuit](../assets/acting/Steering_Stanley_ComparedToPurePur.png) As for the PurePursuit Controller, sadly the achieved good tuning in the Dev-Launch was by far too strong for the Leaderboard-Launch, which is why we needed to Hotfix the Steering in the last week to Tune Stanley alot "weaker". We do not exactly know, why the two launches are this different. (Dev-Launch and Leaderboard-Launch differentiate in synchronicity, Dev-Launch is synchronous, Leaderboard-Launch is asynchronous?) 
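The Stanley law quoted above, $\delta = \theta_e - arctan({k_{ce} \cdot e_{fa} \over v})$, combines the trajectory-heading error with the cross-track error at the front axle. A minimal sketch following that formula (the sign of $e_{fa}$ follows whatever convention the documentation uses; the low-speed guard is an assumption):

```python
import math


def stanley_steering(theta_e, e_fa, velocity, k_ce):
    """Stanley law as given in the documentation:
    delta = theta_e - atan(k_ce * e_fa / v).

    theta_e: heading error relative to the trajectory
    e_fa:    cross-track error at the front axle (docs' sign convention)
    k_ce:    cross-track gain -- the factor tuned per the text above
    """
    v = max(velocity, 1e-3)  # assumed guard against division by zero
    return theta_e - math.atan2(k_ce * e_fa, v)
```

With zero cross-track error the output reduces to the heading error alone; increasing `k_ce` amplifies how strongly the cross-track term "flows" into the output, exactly the tuning knob the text describes.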
diff --git a/doc/05_acting/04_vehicle_controller.md b/doc/acting/vehicle_controller.md similarity index 95% rename from doc/05_acting/04_vehicle_controller.md rename to doc/acting/vehicle_controller.md index 1bb45558..b8a2d2a3 100644 --- a/doc/05_acting/04_vehicle_controller.md +++ b/doc/acting/vehicle_controller.md @@ -64,7 +64,7 @@ This is done to prevent firing the emergency brake each time the main loop is re Comparison between normal braking and emergency braking: -![Braking Comparison](/doc/00_assets/acting/emergency_brake_stats_graph.png) +![Braking Comparison](/doc/assets/acting/emergency_brake_stats_graph.png) _Please be aware, that this bug abuse might not work in newer updates!_ @@ -72,7 +72,7 @@ _Please be aware, that this bug abuse might not work in newer updates!_ The Vehicle Controller also reads ```current_behavior```-Messages, published by Planning, currently reacting to the **unstuck-behavior**: -This is done to drive in a specific way whenever we get into a stuck situation and the [Unstuck Behavior](/doc/07_planning/Behavior_detailed.md) is persued. +This is done to drive in a specific way whenever we get into a stuck situation and the [Unstuck Behavior](/doc/planning/Behavior_detailed.md) is persued. Inside the Unstuck Behavior we want drive backwards without steering, which is why it is the only case, where we do not use any of our steering controllers. 
diff --git a/doc/05_acting/02_velocity_controller.md b/doc/acting/velocity_controller.md similarity index 91% rename from doc/05_acting/02_velocity_controller.md rename to doc/acting/velocity_controller.md index 645cae21..9d8bbe3d 100644 --- a/doc/05_acting/02_velocity_controller.md +++ b/doc/acting/velocity_controller.md @@ -31,18 +31,18 @@ For more information about PID-Controllers and how they work, follow [this link] Currently, we use a tuned PID-Controller which was tuned for the speed of 14 m/s (around 50 km/h), as this is the most commonly driven velocity in this simulation: -![MISSING: PID-TUNING-IMAGE](../00_assets/acting/VelContr_PID_StepResponse.png) +![MISSING: PID-TUNING-IMAGE](../assets/acting/VelContr_PID_StepResponse.png) Be aware, that the CARLA-Vehicle shifts gears automatically, resulting in the bumps you see! As PID-Controllers are linear by nature, the velocity-system is therefore linearized around 50 km/h, meaning the further you deviate from 50 km/h the worse the controller's performance gets: -![MISSING: PID-LINEARIZATION-IMAGE](../00_assets/acting/VelContr_PID_differentVelocities.png) +![MISSING: PID-LINEARIZATION-IMAGE](../assets/acting/VelContr_PID_differentVelocities.png) As the Velocity Controller also has to handle braking, we currently use ```throttle```-optimized PID-Controller to calculate ```brake``` aswell (Since adding another Controller, like a P-Controller, did not work nearly as well!): -![MISSING: PID-BRAKING-IMAGE](../00_assets/acting/VelContr_PID_BrakingWithThrottlePID.png) +![MISSING: PID-BRAKING-IMAGE](../assets/acting/VelContr_PID_BrakingWithThrottlePID.png) -Currently, there is no general backwards-driving implemented here, as this was not needed (other than the [Unstuck Routine](/doc/07_planning/Behavior_detailed.md)). +Currently, there is no general backwards-driving implemented here, as this was not needed (other than the [Unstuck Routine](/doc/planning/Behavior_detailed.md)). 
Negative ```target_velocity``` signals are currently taken care off by braking until we stand still. The ONLY exception is a ```target_velocity``` of **-3!**! diff --git a/doc/00_assets/2_15_layover.png b/doc/assets/2_15_layover.png similarity index 100% rename from doc/00_assets/2_15_layover.png rename to doc/assets/2_15_layover.png diff --git a/doc/00_assets/2_layover.png b/doc/assets/2_layover.png similarity index 100% rename from doc/00_assets/2_layover.png rename to doc/assets/2_layover.png diff --git a/doc/00_assets/3_layover.png b/doc/assets/3_layover.png similarity index 100% rename from doc/00_assets/3_layover.png rename to doc/assets/3_layover.png diff --git a/doc/00_assets/3d_2d_formula.png b/doc/assets/3d_2d_formula.png similarity index 100% rename from doc/00_assets/3d_2d_formula.png rename to doc/assets/3d_2d_formula.png diff --git a/doc/00_assets/3d_2d_projection.png b/doc/assets/3d_2d_projection.png similarity index 100% rename from doc/00_assets/3d_2d_projection.png rename to doc/assets/3d_2d_projection.png diff --git a/doc/00_assets/4_layover.png b/doc/assets/4_layover.png similarity index 100% rename from doc/00_assets/4_layover.png rename to doc/assets/4_layover.png diff --git a/doc/00_assets/Back_Detection.png b/doc/assets/Back_Detection.png similarity index 100% rename from doc/00_assets/Back_Detection.png rename to doc/assets/Back_Detection.png diff --git a/doc/00_assets/Comment_PR.png b/doc/assets/Comment_PR.png similarity index 100% rename from doc/00_assets/Comment_PR.png rename to doc/assets/Comment_PR.png diff --git a/doc/00_assets/Comment_viewed.png b/doc/assets/Comment_viewed.png similarity index 100% rename from doc/00_assets/Comment_viewed.png rename to doc/assets/Comment_viewed.png diff --git a/doc/00_assets/Commit_suggestion.png b/doc/assets/Commit_suggestion.png similarity index 100% rename from doc/00_assets/Commit_suggestion.png rename to doc/assets/Commit_suggestion.png diff --git a/doc/00_assets/Driving_SM.png 
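The velocity controller described in the hunks above is a plain PID on the speed error, with the same controller output reused for braking. A minimal discrete sketch (the gains and the [0, 1] throttle clamp are illustrative assumptions, not the tuned values from the node):

```python
class PIDVelocityController:
    """Discrete PID producing a throttle command from a speed error.

    Gains are placeholders -- the actual controller was tuned around
    14 m/s (about 50 km/h) as described in the documentation.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_velocity, current_velocity):
        error = target_velocity - current_velocity
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(1.0, u))  # clamp to a [0, 1] throttle signal
```

Because a PID is linear, its behavior degrades the further the operating point drifts from the velocity it was tuned at, which is exactly the linearization-around-50-km/h caveat made above.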
b/doc/assets/Driving_SM.png similarity index 100% rename from doc/00_assets/Driving_SM.png rename to doc/assets/Driving_SM.png diff --git a/doc/00_assets/Files_Changed.png b/doc/assets/Files_Changed.png similarity index 100% rename from doc/00_assets/Files_Changed.png rename to doc/assets/Files_Changed.png diff --git a/doc/00_assets/Front_Detection.png b/doc/assets/Front_Detection.png similarity index 100% rename from doc/00_assets/Front_Detection.png rename to doc/assets/Front_Detection.png diff --git a/doc/00_assets/Global_Plan.png b/doc/assets/Global_Plan.png similarity index 100% rename from doc/00_assets/Global_Plan.png rename to doc/assets/Global_Plan.png diff --git a/doc/00_assets/Intersection_SM.png b/doc/assets/Intersection_SM.png similarity index 100% rename from doc/00_assets/Intersection_SM.png rename to doc/assets/Intersection_SM.png diff --git a/doc/00_assets/Lane_Change_SM.png b/doc/assets/Lane_Change_SM.png similarity index 100% rename from doc/00_assets/Lane_Change_SM.png rename to doc/assets/Lane_Change_SM.png diff --git a/doc/00_assets/Lanelets.png b/doc/assets/Lanelets.png similarity index 100% rename from doc/00_assets/Lanelets.png rename to doc/assets/Lanelets.png diff --git a/doc/00_assets/Left_Detection.png b/doc/assets/Left_Detection.png similarity index 100% rename from doc/00_assets/Left_Detection.png rename to doc/assets/Left_Detection.png diff --git a/doc/00_assets/PR_overview.png b/doc/assets/PR_overview.png similarity index 100% rename from doc/00_assets/PR_overview.png rename to doc/assets/PR_overview.png diff --git a/doc/00_assets/Planning_Implementierung.png b/doc/assets/Planning_Implementierung.png similarity index 100% rename from doc/00_assets/Planning_Implementierung.png rename to doc/assets/Planning_Implementierung.png diff --git a/doc/00_assets/Pycharm_PR.png b/doc/assets/Pycharm_PR.png similarity index 100% rename from doc/00_assets/Pycharm_PR.png rename to doc/assets/Pycharm_PR.png diff --git 
a/doc/00_assets/Resolve_conversation.png b/doc/assets/Resolve_conversation.png similarity index 100% rename from doc/00_assets/Resolve_conversation.png rename to doc/assets/Resolve_conversation.png diff --git a/doc/00_assets/Review_changes.png b/doc/assets/Review_changes.png similarity index 100% rename from doc/00_assets/Review_changes.png rename to doc/assets/Review_changes.png diff --git a/doc/00_assets/Right_Detection.png b/doc/assets/Right_Detection.png similarity index 100% rename from doc/00_assets/Right_Detection.png rename to doc/assets/Right_Detection.png diff --git a/doc/00_assets/Right_lane.png b/doc/assets/Right_lane.png similarity index 100% rename from doc/00_assets/Right_lane.png rename to doc/assets/Right_lane.png diff --git a/doc/00_assets/Road0_cutout.png b/doc/assets/Road0_cutout.png similarity index 100% rename from doc/00_assets/Road0_cutout.png rename to doc/assets/Road0_cutout.png diff --git a/doc/00_assets/Stop_sign_OpenDrive.png b/doc/assets/Stop_sign_OpenDrive.png similarity index 100% rename from doc/00_assets/Stop_sign_OpenDrive.png rename to doc/assets/Stop_sign_OpenDrive.png diff --git a/doc/00_assets/Suggestion.png b/doc/assets/Suggestion.png similarity index 100% rename from doc/00_assets/Suggestion.png rename to doc/assets/Suggestion.png diff --git a/doc/00_assets/Super_SM.png b/doc/assets/Super_SM.png similarity index 100% rename from doc/00_assets/Super_SM.png rename to doc/assets/Super_SM.png diff --git a/doc/00_assets/TR01.png b/doc/assets/TR01.png similarity index 100% rename from doc/00_assets/TR01.png rename to doc/assets/TR01.png diff --git a/doc/00_assets/TR02.png b/doc/assets/TR02.png similarity index 100% rename from doc/00_assets/TR02.png rename to doc/assets/TR02.png diff --git a/doc/00_assets/TR03.png b/doc/assets/TR03.png similarity index 100% rename from doc/00_assets/TR03.png rename to doc/assets/TR03.png diff --git a/doc/00_assets/TR04.png b/doc/assets/TR04.png similarity index 100% rename from 
doc/00_assets/TR04.png rename to doc/assets/TR04.png diff --git a/doc/00_assets/TR05.png b/doc/assets/TR05.png similarity index 100% rename from doc/00_assets/TR05.png rename to doc/assets/TR05.png diff --git a/doc/00_assets/TR06.png b/doc/assets/TR06.png similarity index 100% rename from doc/00_assets/TR06.png rename to doc/assets/TR06.png diff --git a/doc/00_assets/TR07.png b/doc/assets/TR07.png similarity index 100% rename from doc/00_assets/TR07.png rename to doc/assets/TR07.png diff --git a/doc/00_assets/TR08.png b/doc/assets/TR08.png similarity index 100% rename from doc/00_assets/TR08.png rename to doc/assets/TR08.png diff --git a/doc/00_assets/TR09.png b/doc/assets/TR09.png similarity index 100% rename from doc/00_assets/TR09.png rename to doc/assets/TR09.png diff --git a/doc/00_assets/TR10.png b/doc/assets/TR10.png similarity index 100% rename from doc/00_assets/TR10.png rename to doc/assets/TR10.png diff --git a/doc/00_assets/TR11.png b/doc/assets/TR11.png similarity index 100% rename from doc/00_assets/TR11.png rename to doc/assets/TR11.png diff --git a/doc/00_assets/TR12.png b/doc/assets/TR12.png similarity index 100% rename from doc/00_assets/TR12.png rename to doc/assets/TR12.png diff --git a/doc/00_assets/TR14.png b/doc/assets/TR14.png similarity index 100% rename from doc/00_assets/TR14.png rename to doc/assets/TR14.png diff --git a/doc/00_assets/TR15.png b/doc/assets/TR15.png similarity index 100% rename from doc/00_assets/TR15.png rename to doc/assets/TR15.png diff --git a/doc/00_assets/TR16.png b/doc/assets/TR16.png similarity index 100% rename from doc/00_assets/TR16.png rename to doc/assets/TR16.png diff --git a/doc/00_assets/TR17.png b/doc/assets/TR17.png similarity index 100% rename from doc/00_assets/TR17.png rename to doc/assets/TR17.png diff --git a/doc/00_assets/TR18.png b/doc/assets/TR18.png similarity index 100% rename from doc/00_assets/TR18.png rename to doc/assets/TR18.png diff --git a/doc/00_assets/TR19.png b/doc/assets/TR19.png 
similarity index 100% rename from doc/00_assets/TR19.png rename to doc/assets/TR19.png diff --git a/doc/00_assets/TR20.png b/doc/assets/TR20.png similarity index 100% rename from doc/00_assets/TR20.png rename to doc/assets/TR20.png diff --git a/doc/00_assets/TR21.png b/doc/assets/TR21.png similarity index 100% rename from doc/00_assets/TR21.png rename to doc/assets/TR21.png diff --git a/doc/00_assets/TR22.png b/doc/assets/TR22.png similarity index 100% rename from doc/00_assets/TR22.png rename to doc/assets/TR22.png diff --git a/doc/00_assets/TR23.png b/doc/assets/TR23.png similarity index 100% rename from doc/00_assets/TR23.png rename to doc/assets/TR23.png diff --git a/doc/00_assets/Traffic_SM.png b/doc/assets/Traffic_SM.png similarity index 100% rename from doc/00_assets/Traffic_SM.png rename to doc/assets/Traffic_SM.png diff --git a/doc/00_assets/acting/Architecture_Acting.png b/doc/assets/acting/Architecture_Acting.png similarity index 100% rename from doc/00_assets/acting/Architecture_Acting.png rename to doc/assets/acting/Architecture_Acting.png diff --git a/doc/00_assets/acting/Steering_PurePursuit.png b/doc/assets/acting/Steering_PurePursuit.png similarity index 100% rename from doc/00_assets/acting/Steering_PurePursuit.png rename to doc/assets/acting/Steering_PurePursuit.png diff --git a/doc/00_assets/acting/Steering_PurePursuit_Tuning.png b/doc/assets/acting/Steering_PurePursuit_Tuning.png similarity index 100% rename from doc/00_assets/acting/Steering_PurePursuit_Tuning.png rename to doc/assets/acting/Steering_PurePursuit_Tuning.png diff --git a/doc/00_assets/acting/Steering_Stanley.png b/doc/assets/acting/Steering_Stanley.png similarity index 100% rename from doc/00_assets/acting/Steering_Stanley.png rename to doc/assets/acting/Steering_Stanley.png diff --git a/doc/00_assets/acting/Steering_Stanley_ComparedToPurePur.png b/doc/assets/acting/Steering_Stanley_ComparedToPurePur.png similarity index 100% rename from 
doc/00_assets/acting/Steering_Stanley_ComparedToPurePur.png rename to doc/assets/acting/Steering_Stanley_ComparedToPurePur.png diff --git a/doc/00_assets/acting/VelContr_PID_BrakingWithThrottlePID.png b/doc/assets/acting/VelContr_PID_BrakingWithThrottlePID.png similarity index 100% rename from doc/00_assets/acting/VelContr_PID_BrakingWithThrottlePID.png rename to doc/assets/acting/VelContr_PID_BrakingWithThrottlePID.png diff --git a/doc/00_assets/acting/VelContr_PID_StepResponse.png b/doc/assets/acting/VelContr_PID_StepResponse.png similarity index 100% rename from doc/00_assets/acting/VelContr_PID_StepResponse.png rename to doc/assets/acting/VelContr_PID_StepResponse.png diff --git a/doc/00_assets/acting/VelContr_PID_differentVelocities.png b/doc/assets/acting/VelContr_PID_differentVelocities.png similarity index 100% rename from doc/00_assets/acting/VelContr_PID_differentVelocities.png rename to doc/assets/acting/VelContr_PID_differentVelocities.png diff --git a/doc/00_assets/acting/emergency_brake_stats_graph.png b/doc/assets/acting/emergency_brake_stats_graph.png similarity index 100% rename from doc/00_assets/acting/emergency_brake_stats_graph.png rename to doc/assets/acting/emergency_brake_stats_graph.png diff --git a/doc/00_assets/acting/main_frame_publisher_bug.png b/doc/assets/acting/main_frame_publisher_bug.png similarity index 100% rename from doc/00_assets/acting/main_frame_publisher_bug.png rename to doc/assets/acting/main_frame_publisher_bug.png diff --git a/doc/00_assets/behaviour_tree.png b/doc/assets/behaviour_tree.png similarity index 100% rename from doc/00_assets/behaviour_tree.png rename to doc/assets/behaviour_tree.png diff --git a/doc/00_assets/berechnungsmodell.png b/doc/assets/berechnungsmodell.png similarity index 100% rename from doc/00_assets/berechnungsmodell.png rename to doc/assets/berechnungsmodell.png diff --git a/doc/00_assets/branch_overview.png b/doc/assets/branch_overview.png similarity index 100% rename from 
doc/00_assets/branch_overview.png rename to doc/assets/branch_overview.png diff --git a/doc/00_assets/bug_template.png b/doc/assets/bug_template.png similarity index 100% rename from doc/00_assets/bug_template.png rename to doc/assets/bug_template.png diff --git a/doc/00_assets/create_issue.png b/doc/assets/create_issue.png similarity index 100% rename from doc/00_assets/create_issue.png rename to doc/assets/create_issue.png diff --git a/doc/00_assets/distance_visualization.png b/doc/assets/distance_visualization.png similarity index 100% rename from doc/00_assets/distance_visualization.png rename to doc/assets/distance_visualization.png diff --git a/doc/00_assets/efficientps_structure.png b/doc/assets/efficientps_structure.png similarity index 100% rename from doc/00_assets/efficientps_structure.png rename to doc/assets/efficientps_structure.png diff --git a/doc/00_assets/fahrzeugapproximation.png b/doc/assets/fahrzeugapproximation.png similarity index 100% rename from doc/00_assets/fahrzeugapproximation.png rename to doc/assets/fahrzeugapproximation.png diff --git a/doc/00_assets/fahrzeugpositionsberechnung.png b/doc/assets/fahrzeugpositionsberechnung.png similarity index 100% rename from doc/00_assets/fahrzeugpositionsberechnung.png rename to doc/assets/fahrzeugpositionsberechnung.png diff --git a/doc/00_assets/fahrzeugwinkelberechnung.png b/doc/assets/fahrzeugwinkelberechnung.png similarity index 100% rename from doc/00_assets/fahrzeugwinkelberechnung.png rename to doc/assets/fahrzeugwinkelberechnung.png diff --git a/doc/00_assets/feature_template.png b/doc/assets/feature_template.png similarity index 100% rename from doc/00_assets/feature_template.png rename to doc/assets/feature_template.png diff --git a/doc/00_assets/filter_img/avg_10_w_0_500.png b/doc/assets/filter_img/avg_10_w_0_500.png similarity index 100% rename from doc/00_assets/filter_img/avg_10_w_0_500.png rename to doc/assets/filter_img/avg_10_w_0_500.png diff --git 
a/doc/00_assets/filter_img/avg_10_w_0_750.png b/doc/assets/filter_img/avg_10_w_0_750.png similarity index 100% rename from doc/00_assets/filter_img/avg_10_w_0_750.png rename to doc/assets/filter_img/avg_10_w_0_750.png diff --git a/doc/00_assets/filter_img/avg_10_w_1_000.png b/doc/assets/filter_img/avg_10_w_1_000.png similarity index 100% rename from doc/00_assets/filter_img/avg_10_w_1_000.png rename to doc/assets/filter_img/avg_10_w_1_000.png diff --git a/doc/00_assets/filter_img/avg_1_w_0_500.png b/doc/assets/filter_img/avg_1_w_0_500.png similarity index 100% rename from doc/00_assets/filter_img/avg_1_w_0_500.png rename to doc/assets/filter_img/avg_1_w_0_500.png diff --git a/doc/00_assets/filter_img/avg_1_w_0_750.png b/doc/assets/filter_img/avg_1_w_0_750.png similarity index 100% rename from doc/00_assets/filter_img/avg_1_w_0_750.png rename to doc/assets/filter_img/avg_1_w_0_750.png diff --git a/doc/00_assets/filter_img/avg_1_w_1_000.png b/doc/assets/filter_img/avg_1_w_1_000.png similarity index 100% rename from doc/00_assets/filter_img/avg_1_w_1_000.png rename to doc/assets/filter_img/avg_1_w_1_000.png diff --git a/doc/00_assets/filter_img/avg_20_w_0_750.png b/doc/assets/filter_img/avg_20_w_0_750.png similarity index 100% rename from doc/00_assets/filter_img/avg_20_w_0_750.png rename to doc/assets/filter_img/avg_20_w_0_750.png diff --git a/doc/00_assets/filter_img/avg_7_w_0_500.png b/doc/assets/filter_img/avg_7_w_0_500.png similarity index 100% rename from doc/00_assets/filter_img/avg_7_w_0_500.png rename to doc/assets/filter_img/avg_7_w_0_500.png diff --git a/doc/00_assets/filter_img/avg_7_w_0_750.png b/doc/assets/filter_img/avg_7_w_0_750.png similarity index 100% rename from doc/00_assets/filter_img/avg_7_w_0_750.png rename to doc/assets/filter_img/avg_7_w_0_750.png diff --git a/doc/00_assets/filter_img/avg_7_w_1_000.png b/doc/assets/filter_img/avg_7_w_1_000.png similarity index 100% rename from doc/00_assets/filter_img/avg_7_w_1_000.png rename to 
doc/assets/filter_img/avg_7_w_1_000.png diff --git a/doc/00_assets/filter_img/rolling_avg_1.png b/doc/assets/filter_img/rolling_avg_1.png similarity index 100% rename from doc/00_assets/filter_img/rolling_avg_1.png rename to doc/assets/filter_img/rolling_avg_1.png diff --git a/doc/00_assets/filter_img/rolling_avg_10.png b/doc/assets/filter_img/rolling_avg_10.png similarity index 100% rename from doc/00_assets/filter_img/rolling_avg_10.png rename to doc/assets/filter_img/rolling_avg_10.png diff --git a/doc/00_assets/filter_img/rolling_avg_20.png b/doc/assets/filter_img/rolling_avg_20.png similarity index 100% rename from doc/00_assets/filter_img/rolling_avg_20.png rename to doc/assets/filter_img/rolling_avg_20.png diff --git a/doc/00_assets/filter_img/rolling_avg_5.png b/doc/assets/filter_img/rolling_avg_5.png similarity index 100% rename from doc/00_assets/filter_img/rolling_avg_5.png rename to doc/assets/filter_img/rolling_avg_5.png diff --git a/doc/00_assets/gdrive-paf.png b/doc/assets/gdrive-paf.png similarity index 100% rename from doc/00_assets/gdrive-paf.png rename to doc/assets/gdrive-paf.png diff --git a/doc/00_assets/gdrive-permissions.png b/doc/assets/gdrive-permissions.png similarity index 100% rename from doc/00_assets/gdrive-permissions.png rename to doc/assets/gdrive-permissions.png diff --git a/doc/00_assets/gewinnerteam19-architektur.png b/doc/assets/gewinnerteam19-architektur.png similarity index 100% rename from doc/00_assets/gewinnerteam19-architektur.png rename to doc/assets/gewinnerteam19-architektur.png diff --git a/doc/00_assets/git-flow.svg b/doc/assets/git-flow.svg similarity index 100% rename from doc/00_assets/git-flow.svg rename to doc/assets/git-flow.svg diff --git a/doc/00_assets/github-action-md.png b/doc/assets/github-action-md.png similarity index 100% rename from doc/00_assets/github-action-md.png rename to doc/assets/github-action-md.png diff --git a/doc/00_assets/github-action-py.png b/doc/assets/github-action-py.png similarity 
index 100% rename from doc/00_assets/github-action-py.png rename to doc/assets/github-action-py.png diff --git a/doc/00_assets/github_create_a_branch.png b/doc/assets/github_create_a_branch.png similarity index 100% rename from doc/00_assets/github_create_a_branch.png rename to doc/assets/github_create_a_branch.png diff --git a/doc/00_assets/global_trajectory.png b/doc/assets/global_trajectory.png similarity index 100% rename from doc/00_assets/global_trajectory.png rename to doc/assets/global_trajectory.png diff --git a/doc/00_assets/gnss_ohne_rolling_average.png b/doc/assets/gnss_ohne_rolling_average.png similarity index 100% rename from doc/00_assets/gnss_ohne_rolling_average.png rename to doc/assets/gnss_ohne_rolling_average.png diff --git a/doc/00_assets/implementation_plan_perception.jpg b/doc/assets/implementation_plan_perception.jpg similarity index 100% rename from doc/00_assets/implementation_plan_perception.jpg rename to doc/assets/implementation_plan_perception.jpg diff --git a/doc/00_assets/intersection.png b/doc/assets/intersection.png similarity index 100% rename from doc/00_assets/intersection.png rename to doc/assets/intersection.png diff --git a/doc/00_assets/intersection_2.png b/doc/assets/intersection_2.png similarity index 100% rename from doc/00_assets/intersection_2.png rename to doc/assets/intersection_2.png diff --git a/doc/00_assets/issue_wizard.png b/doc/assets/issue_wizard.png similarity index 100% rename from doc/00_assets/issue_wizard.png rename to doc/assets/issue_wizard.png diff --git a/doc/00_assets/junction.png b/doc/assets/junction.png similarity index 100% rename from doc/00_assets/junction.png rename to doc/assets/junction.png diff --git a/doc/00_assets/kollisionsberechnung.png b/doc/assets/kollisionsberechnung.png similarity index 100% rename from doc/00_assets/kollisionsberechnung.png rename to doc/assets/kollisionsberechnung.png diff --git a/doc/00_assets/kreuzungszonen.png b/doc/assets/kreuzungszonen.png similarity index 
100% rename from doc/00_assets/kreuzungszonen.png rename to doc/assets/kreuzungszonen.png diff --git a/doc/00_assets/lane_midpoint.png b/doc/assets/lane_midpoint.png similarity index 100% rename from doc/00_assets/lane_midpoint.png rename to doc/assets/lane_midpoint.png diff --git a/doc/00_assets/leaderboard-1.png b/doc/assets/leaderboard-1.png similarity index 100% rename from doc/00_assets/leaderboard-1.png rename to doc/assets/leaderboard-1.png diff --git a/doc/00_assets/leaderboard-2.png b/doc/assets/leaderboard-2.png similarity index 100% rename from doc/00_assets/leaderboard-2.png rename to doc/assets/leaderboard-2.png diff --git a/doc/00_assets/legend_bt.png b/doc/assets/legend_bt.png similarity index 100% rename from doc/00_assets/legend_bt.png rename to doc/assets/legend_bt.png diff --git a/doc/00_assets/lidar_filter.png b/doc/assets/lidar_filter.png similarity index 100% rename from doc/00_assets/lidar_filter.png rename to doc/assets/lidar_filter.png diff --git a/doc/00_assets/lidarhinderniserkennung.png b/doc/assets/lidarhinderniserkennung.png similarity index 100% rename from doc/00_assets/lidarhinderniserkennung.png rename to doc/assets/lidarhinderniserkennung.png diff --git a/doc/00_assets/local_trajectory.png b/doc/assets/local_trajectory.png similarity index 100% rename from doc/00_assets/local_trajectory.png rename to doc/assets/local_trajectory.png diff --git a/doc/00_assets/multi_lane.png b/doc/assets/multi_lane.png similarity index 100% rename from doc/00_assets/multi_lane.png rename to doc/assets/multi_lane.png diff --git a/doc/00_assets/nvcc_version.png b/doc/assets/nvcc_version.png similarity index 100% rename from doc/00_assets/nvcc_version.png rename to doc/assets/nvcc_version.png diff --git a/doc/00_assets/occupancygrid.png b/doc/assets/occupancygrid.png similarity index 100% rename from doc/00_assets/occupancygrid.png rename to doc/assets/occupancygrid.png diff --git a/doc/00_assets/optimierungsvisualisierung.png 
b/doc/assets/optimierungsvisualisierung.png similarity index 100% rename from doc/00_assets/optimierungsvisualisierung.png rename to doc/assets/optimierungsvisualisierung.png diff --git a/doc/00_assets/overtaking_overview.png b/doc/assets/overtaking_overview.png similarity index 100% rename from doc/00_assets/overtaking_overview.png rename to doc/assets/overtaking_overview.png diff --git a/doc/00_assets/overview.jpg b/doc/assets/overview.jpg similarity index 100% rename from doc/00_assets/overview.jpg rename to doc/assets/overview.jpg diff --git a/doc/00_assets/perception/adding_new_position_methods.png b/doc/assets/perception/adding_new_position_methods.png similarity index 100% rename from doc/00_assets/perception/adding_new_position_methods.png rename to doc/assets/perception/adding_new_position_methods.png diff --git a/doc/00_assets/perception/data_26_MAE_Boxed.png b/doc/assets/perception/data_26_MAE_Boxed.png similarity index 100% rename from doc/00_assets/perception/data_26_MAE_Boxed.png rename to doc/assets/perception/data_26_MAE_Boxed.png diff --git a/doc/00_assets/perception/data_26_MSE_Boxed.png b/doc/assets/perception/data_26_MSE_Boxed.png similarity index 100% rename from doc/00_assets/perception/data_26_MSE_Boxed.png rename to doc/assets/perception/data_26_MSE_Boxed.png diff --git a/doc/00_assets/perception/kalman_installation_guide.png b/doc/assets/perception/kalman_installation_guide.png similarity index 100% rename from doc/00_assets/perception/kalman_installation_guide.png rename to doc/assets/perception/kalman_installation_guide.png diff --git a/doc/00_assets/perception/modular_subscriber_example.png b/doc/assets/perception/modular_subscriber_example.png similarity index 100% rename from doc/00_assets/perception/modular_subscriber_example.png rename to doc/assets/perception/modular_subscriber_example.png diff --git a/doc/00_assets/perception/new_heading_pub_example.png b/doc/assets/perception/new_heading_pub_example.png similarity index 100% 
rename from doc/00_assets/perception/new_heading_pub_example.png rename to doc/assets/perception/new_heading_pub_example.png diff --git a/doc/00_assets/perception/non_linear_kalman_example.png b/doc/assets/perception/non_linear_kalman_example.png similarity index 100% rename from doc/00_assets/perception/non_linear_kalman_example.png rename to doc/assets/perception/non_linear_kalman_example.png diff --git a/doc/00_assets/perception/quat_to_angle.png b/doc/assets/perception/quat_to_angle.png similarity index 100% rename from doc/00_assets/perception/quat_to_angle.png rename to doc/assets/perception/quat_to_angle.png diff --git a/doc/00_assets/perception/sensor_debug_change.png b/doc/assets/perception/sensor_debug_change.png similarity index 100% rename from doc/00_assets/perception/sensor_debug_change.png rename to doc/assets/perception/sensor_debug_change.png diff --git a/doc/00_assets/perception/sensor_debug_data_saving.png b/doc/assets/perception/sensor_debug_data_saving.png similarity index 100% rename from doc/00_assets/perception/sensor_debug_data_saving.png rename to doc/assets/perception/sensor_debug_data_saving.png diff --git a/doc/00_assets/perception/sensor_debug_viz_config.png b/doc/assets/perception/sensor_debug_viz_config.png similarity index 100% rename from doc/00_assets/perception/sensor_debug_viz_config.png rename to doc/assets/perception/sensor_debug_viz_config.png diff --git "a/doc/00_assets/planning \303\274bersicht.png" "b/doc/assets/planning \303\274bersicht.png" similarity index 100% rename from "doc/00_assets/planning \303\274bersicht.png" rename to "doc/assets/planning \303\274bersicht.png" diff --git a/doc/00_assets/planning/BT_paper.png b/doc/assets/planning/BT_paper.png similarity index 100% rename from doc/00_assets/planning/BT_paper.png rename to doc/assets/planning/BT_paper.png diff --git a/doc/00_assets/planning/BehaviorTree_medium.png b/doc/assets/planning/BehaviorTree_medium.png similarity index 100% rename from 
doc/00_assets/planning/BehaviorTree_medium.png rename to doc/assets/planning/BehaviorTree_medium.png diff --git a/doc/00_assets/planning/Globalplan.png b/doc/assets/planning/Globalplan.png similarity index 100% rename from doc/00_assets/planning/Globalplan.png rename to doc/assets/planning/Globalplan.png diff --git a/doc/00_assets/planning/Overtake_car_trajectory.png b/doc/assets/planning/Overtake_car_trajectory.png similarity index 100% rename from doc/00_assets/planning/Overtake_car_trajectory.png rename to doc/assets/planning/Overtake_car_trajectory.png diff --git a/doc/00_assets/planning/Planning.png b/doc/assets/planning/Planning.png similarity index 100% rename from doc/00_assets/planning/Planning.png rename to doc/assets/planning/Planning.png diff --git a/doc/00_assets/planning/Planning_architecture.png b/doc/assets/planning/Planning_architecture.png similarity index 100% rename from doc/00_assets/planning/Planning_architecture.png rename to doc/assets/planning/Planning_architecture.png diff --git a/doc/00_assets/planning/Planning_paf21.png b/doc/assets/planning/Planning_paf21.png similarity index 100% rename from doc/00_assets/planning/Planning_paf21.png rename to doc/assets/planning/Planning_paf21.png diff --git a/doc/00_assets/planning/collision_check.png b/doc/assets/planning/collision_check.png similarity index 100% rename from doc/00_assets/planning/collision_check.png rename to doc/assets/planning/collision_check.png diff --git a/doc/00_assets/planning/intersection_scenario.png b/doc/assets/planning/intersection_scenario.png similarity index 100% rename from doc/00_assets/planning/intersection_scenario.png rename to doc/assets/planning/intersection_scenario.png diff --git a/doc/00_assets/planning/localplan.png b/doc/assets/planning/localplan.png similarity index 100% rename from doc/00_assets/planning/localplan.png rename to doc/assets/planning/localplan.png diff --git a/doc/00_assets/planning/overtaking_scenario.png 
b/doc/assets/planning/overtaking_scenario.png similarity index 100% rename from doc/00_assets/planning/overtaking_scenario.png rename to doc/assets/planning/overtaking_scenario.png diff --git a/doc/00_assets/planning/overview.jpg b/doc/assets/planning/overview.jpg similarity index 100% rename from doc/00_assets/planning/overview.jpg rename to doc/assets/planning/overview.jpg diff --git a/doc/00_assets/planning/overview.png b/doc/assets/planning/overview.png similarity index 100% rename from doc/00_assets/planning/overview.png rename to doc/assets/planning/overview.png diff --git a/doc/00_assets/planning/overview_paper1.png b/doc/assets/planning/overview_paper1.png similarity index 100% rename from doc/00_assets/planning/overview_paper1.png rename to doc/assets/planning/overview_paper1.png diff --git a/doc/00_assets/planning/plot_full_trajectory_1_degree.png b/doc/assets/planning/plot_full_trajectory_1_degree.png similarity index 100% rename from doc/00_assets/planning/plot_full_trajectory_1_degree.png rename to doc/assets/planning/plot_full_trajectory_1_degree.png diff --git a/doc/00_assets/planning/prios.png b/doc/assets/planning/prios.png similarity index 100% rename from doc/00_assets/planning/prios.png rename to doc/assets/planning/prios.png diff --git a/doc/00_assets/planning/simple_final_tree.png b/doc/assets/planning/simple_final_tree.png similarity index 100% rename from doc/00_assets/planning/simple_final_tree.png rename to doc/assets/planning/simple_final_tree.png diff --git a/doc/00_assets/planning/test_frenet_results.png b/doc/assets/planning/test_frenet_results.png similarity index 100% rename from doc/00_assets/planning/test_frenet_results.png rename to doc/assets/planning/test_frenet_results.png diff --git a/doc/00_assets/planning/three_scenarios.png b/doc/assets/planning/three_scenarios.png similarity index 100% rename from doc/00_assets/planning/three_scenarios.png rename to doc/assets/planning/three_scenarios.png diff --git 
a/doc/00_assets/planning/vector_calculation.png b/doc/assets/planning/vector_calculation.png similarity index 100% rename from doc/00_assets/planning/vector_calculation.png rename to doc/assets/planning/vector_calculation.png diff --git a/doc/00_assets/planning/vision_objects_filter_cc.png b/doc/assets/planning/vision_objects_filter_cc.png similarity index 100% rename from doc/00_assets/planning/vision_objects_filter_cc.png rename to doc/assets/planning/vision_objects_filter_cc.png diff --git a/doc/00_assets/positionsvektor.png b/doc/assets/positionsvektor.png similarity index 100% rename from doc/00_assets/positionsvektor.png rename to doc/assets/positionsvektor.png diff --git a/doc/00_assets/preplanning_start.png b/doc/assets/preplanning_start.png similarity index 100% rename from doc/00_assets/preplanning_start.png rename to doc/assets/preplanning_start.png diff --git a/doc/00_assets/pytree_PAF_status.drawio.png b/doc/assets/pytree_PAF_status.drawio.png similarity index 100% rename from doc/00_assets/pytree_PAF_status.drawio.png rename to doc/assets/pytree_PAF_status.drawio.png diff --git a/doc/00_assets/reference.png b/doc/assets/reference.png similarity index 100% rename from doc/00_assets/reference.png rename to doc/assets/reference.png diff --git a/doc/00_assets/reference_xodr.png b/doc/assets/reference_xodr.png similarity index 100% rename from doc/00_assets/reference_xodr.png rename to doc/assets/reference_xodr.png diff --git a/doc/00_assets/research_assets/bicyclegeometry.png b/doc/assets/research_assets/bicyclegeometry.png similarity index 100% rename from doc/00_assets/research_assets/bicyclegeometry.png rename to doc/assets/research_assets/bicyclegeometry.png diff --git a/doc/00_assets/research_assets/chattering.gif b/doc/assets/research_assets/chattering.gif similarity index 100% rename from doc/00_assets/research_assets/chattering.gif rename to doc/assets/research_assets/chattering.gif diff --git 
a/doc/00_assets/research_assets/curve_detection_paf21_1.png b/doc/assets/research_assets/curve_detection_paf21_1.png similarity index 100% rename from doc/00_assets/research_assets/curve_detection_paf21_1.png rename to doc/assets/research_assets/curve_detection_paf21_1.png diff --git a/doc/00_assets/research_assets/danglingcarrotgeometry.png b/doc/assets/research_assets/danglingcarrotgeometry.png similarity index 100% rename from doc/00_assets/research_assets/danglingcarrotgeometry.png rename to doc/assets/research_assets/danglingcarrotgeometry.png diff --git a/doc/00_assets/research_assets/messages_paf21_2.png b/doc/assets/research_assets/messages_paf21_2.png similarity index 100% rename from doc/00_assets/research_assets/messages_paf21_2.png rename to doc/assets/research_assets/messages_paf21_2.png diff --git a/doc/00_assets/research_assets/mpc.png b/doc/assets/research_assets/mpc.png similarity index 100% rename from doc/00_assets/research_assets/mpc.png rename to doc/assets/research_assets/mpc.png diff --git a/doc/00_assets/research_assets/pure_pursuit.png b/doc/assets/research_assets/pure_pursuit.png similarity index 100% rename from doc/00_assets/research_assets/pure_pursuit.png rename to doc/assets/research_assets/pure_pursuit.png diff --git a/doc/00_assets/research_assets/standard_routine_paf21_2.png b/doc/assets/research_assets/standard_routine_paf21_2.png similarity index 100% rename from doc/00_assets/research_assets/standard_routine_paf21_2.png rename to doc/assets/research_assets/standard_routine_paf21_2.png diff --git a/doc/00_assets/research_assets/stanley_controller.png b/doc/assets/research_assets/stanley_controller.png similarity index 100% rename from doc/00_assets/research_assets/stanley_controller.png rename to doc/assets/research_assets/stanley_controller.png diff --git a/doc/00_assets/research_assets/stanley_paf21_1.png b/doc/assets/research_assets/stanley_paf21_1.png similarity index 100% rename from 
doc/00_assets/research_assets/stanley_paf21_1.png rename to doc/assets/research_assets/stanley_paf21_1.png diff --git a/doc/00_assets/research_assets/stanleyerror.png b/doc/assets/research_assets/stanleyerror.png similarity index 100% rename from doc/00_assets/research_assets/stanleyerror.png rename to doc/assets/research_assets/stanleyerror.png diff --git a/doc/00_assets/road_option.png b/doc/assets/road_option.png similarity index 100% rename from doc/00_assets/road_option.png rename to doc/assets/road_option.png diff --git a/doc/00_assets/road_options_concept.png b/doc/assets/road_options_concept.png similarity index 100% rename from doc/00_assets/road_options_concept.png rename to doc/assets/road_options_concept.png diff --git a/doc/00_assets/roads_vis.png b/doc/assets/roads_vis.png similarity index 100% rename from doc/00_assets/roads_vis.png rename to doc/assets/roads_vis.png diff --git a/doc/00_assets/segmentation.png b/doc/assets/segmentation.png similarity index 100% rename from doc/00_assets/segmentation.png rename to doc/assets/segmentation.png diff --git a/doc/00_assets/sensoranordnung.png b/doc/assets/sensoranordnung.png similarity index 100% rename from doc/00_assets/sensoranordnung.png rename to doc/assets/sensoranordnung.png diff --git a/doc/00_assets/statemachines.png b/doc/assets/statemachines.png similarity index 100% rename from doc/00_assets/statemachines.png rename to doc/assets/statemachines.png diff --git a/doc/00_assets/top-level.png b/doc/assets/top-level.png similarity index 100% rename from doc/00_assets/top-level.png rename to doc/assets/top-level.png diff --git a/doc/00_assets/trajectory_roads.png b/doc/assets/trajectory_roads.png similarity index 100% rename from doc/00_assets/trajectory_roads.png rename to doc/assets/trajectory_roads.png diff --git a/doc/00_assets/trajekorienfehlermin.png b/doc/assets/trajekorienfehlermin.png similarity index 100% rename from doc/00_assets/trajekorienfehlermin.png rename to 
doc/assets/trajekorienfehlermin.png diff --git a/doc/00_assets/trajektorienberechnung.png b/doc/assets/trajektorienberechnung.png similarity index 100% rename from doc/00_assets/trajektorienberechnung.png rename to doc/assets/trajektorienberechnung.png diff --git a/doc/00_assets/vulkan_device_not_available.png b/doc/assets/vulkan_device_not_available.png similarity index 100% rename from doc/00_assets/vulkan_device_not_available.png rename to doc/assets/vulkan_device_not_available.png diff --git a/doc/08_dev_talks/paf23/sprint_1.md b/doc/dev_talks/paf23/sprint_1.md similarity index 100% rename from doc/08_dev_talks/paf23/sprint_1.md rename to doc/dev_talks/paf23/sprint_1.md diff --git a/doc/08_dev_talks/paf23/sprint_2.md b/doc/dev_talks/paf23/sprint_2.md similarity index 100% rename from doc/08_dev_talks/paf23/sprint_2.md rename to doc/dev_talks/paf23/sprint_2.md diff --git a/doc/08_dev_talks/paf23/sprint_3.md b/doc/dev_talks/paf23/sprint_3.md similarity index 100% rename from doc/08_dev_talks/paf23/sprint_3.md rename to doc/dev_talks/paf23/sprint_3.md diff --git a/doc/08_dev_talks/paf23/sprint_4.md b/doc/dev_talks/paf23/sprint_4.md similarity index 100% rename from doc/08_dev_talks/paf23/sprint_4.md rename to doc/dev_talks/paf23/sprint_4.md diff --git a/doc/08_dev_talks/paf23/sprint_5.md b/doc/dev_talks/paf23/sprint_5.md similarity index 100% rename from doc/08_dev_talks/paf23/sprint_5.md rename to doc/dev_talks/paf23/sprint_5.md diff --git a/doc/08_dev_talks/paf23/sprint_6.md b/doc/dev_talks/paf23/sprint_6.md similarity index 100% rename from doc/08_dev_talks/paf23/sprint_6.md rename to doc/dev_talks/paf23/sprint_6.md diff --git a/doc/08_dev_talks/paf23/sprint_7.md b/doc/dev_talks/paf23/sprint_7.md similarity index 100% rename from doc/08_dev_talks/paf23/sprint_7.md rename to doc/dev_talks/paf23/sprint_7.md diff --git a/doc/08_dev_talks/paf24/mermaid_paf24.md b/doc/dev_talks/paf24/mermaid_paf24.md similarity index 100% rename from 
doc/08_dev_talks/paf24/mermaid_paf24.md rename to doc/dev_talks/paf24/mermaid_paf24.md diff --git a/doc/08_dev_talks/paf24/student_roles24.md b/doc/dev_talks/paf24/student_roles24.md similarity index 100% rename from doc/08_dev_talks/paf24/student_roles24.md rename to doc/dev_talks/paf24/student_roles24.md diff --git a/doc/02_development/Readme.md b/doc/development/Readme.md similarity index 67% rename from doc/02_development/Readme.md rename to doc/development/Readme.md index d4a35baf..0d4e7bde 100644 --- a/doc/02_development/Readme.md +++ b/doc/development/Readme.md @@ -3,22 +3,22 @@ If you contribute to this project please read the following guidelines first: 1. [Start the docker container to simulate the car](../../build/README.md) -2. [Documentation Requirements](./13_documentation_requirements.md) -3. [Commit](./03_commit.md) -4. [Linting](./02_linting.md) -5. [Coding style](./04_coding_style.md) -6. [Git Style](./05_git_workflow.md) -7. [Reviewing](./07_review_guideline.md) -8. [Project management](./08_project_management.md) +2. [Documentation Requirements](./documentation_requirements.md) +3. [Commit](./commit.md) +4. [Linting](./linting.md) +5. [Coding style](./coding_style.md) +6. [Git Style](./git_workflow.md) +7. [Reviewing](./review_guideline.md) +8. [Project management](./project_management.md) 9. Github actions - 1. [linting action](./09_linter_action.md) - 2. [build action](./10_build_action.md) -10. [Install python packages](./10_installing_python_packages.md) -11. [Discord Webhook Documentation](./12_discord_webhook.md) + 1. [linting action](./linter_action.md) + 2. [build action](./build_action.md) +10. [Install python packages](./installing_python_packages.md) +11. [Discord Webhook Documentation](./discord_webhook.md) ## Templates -Some templates are provided in [`doc/02_development/templates`](./templates). +Some templates are provided in [`doc/development/templates`](./templates). 
### [`template_class.py`](./templates/template_class.py) @@ -42,4 +42,4 @@ This template functions a template for who to build knowledge articles for every ## Discord Webhook -[Discord Webhook Documentation](./12_discord_webhook.md) +[Discord Webhook Documentation](./discord_webhook.md) diff --git a/doc/02_development/10_build_action.md b/doc/development/build_action.md similarity index 100% rename from doc/02_development/10_build_action.md rename to doc/development/build_action.md diff --git a/doc/02_development/04_coding_style.md b/doc/development/coding_style.md similarity index 100% rename from doc/02_development/04_coding_style.md rename to doc/development/coding_style.md diff --git a/doc/02_development/12_discord_webhook.md b/doc/development/discord_webhook.md similarity index 100% rename from doc/02_development/12_discord_webhook.md rename to doc/development/discord_webhook.md diff --git a/doc/02_development/14_distributed_simulation.md b/doc/development/distributed_simulation.md similarity index 100% rename from doc/02_development/14_distributed_simulation.md rename to doc/development/distributed_simulation.md diff --git a/doc/02_development/13_documentation_requirements.md b/doc/development/documentation_requirements.md similarity index 95% rename from doc/02_development/13_documentation_requirements.md rename to doc/development/documentation_requirements.md index 4b06ab81..a1f1dcb6 100644 --- a/doc/02_development/13_documentation_requirements.md +++ b/doc/development/documentation_requirements.md @@ -12,8 +12,8 @@ Lennart Luttkus 1. **Readability and Maintainability:** - **Consistent Formatting:** Code should follow a consistent and readable formatting style. Tools like linters or formatters can help enforce a consistent code style. 
- - [02_linting](./02_linting.md) - - [04_coding_style](./04_coding_style.md) + - [linting](./linting.md) + - [coding_style](./coding_style.md) - **Meaningful Names:** Variable and function names should be descriptive and convey the purpose of the code. - **Comments:** Clear and concise comments should be used where necessary to explain complex logic or provide context. 2. **Code Structure:** @@ -36,7 +36,7 @@ Lennart Luttkus - **README Files:** Include a well-written README file that provides an overview of the project, installation instructions, and usage examples. 8. **Version Control:** - **Commit Messages:** Use descriptive and meaningful commit messages to track changes effectively. - - [03_commit](./03_commit.md) + - [commit](./commit.md) - **Branching Strategy:** Follow a consistent and well-defined branching strategy to manage code changes. 9. **Scalability:** - **Avoid Hardcoding:** Parameterize values that might change, making it easier to scale the application. diff --git a/doc/02_development/11_dvc.md b/doc/development/dvc.md similarity index 96% rename from doc/02_development/11_dvc.md rename to doc/development/dvc.md index 61436128..acd8462d 100644 --- a/doc/02_development/11_dvc.md +++ b/doc/development/dvc.md @@ -67,11 +67,11 @@ An administrator has to add your Google Account by doing the following. 1. Go to `https://drive.google.com` and login with our user 2. Click the folder `paf22`: -![paf22 folder](../00_assets/gdrive-paf.png) +![paf22 folder](../assets/gdrive-paf.png) 3. click on `Manage permissions` on the right side 4. Add the user as `Collaborator` -![paf22 folder](../00_assets/gdrive-permissions.png) +![paf22 folder](../assets/gdrive-permissions.png) ## Using DVC @@ -241,15 +241,15 @@ Storing a model file can be done the same way. > The commands below are not meant to execute, since the example is already added in git. > It should give a brief overview about how DVC works. 
-> However, the process is adaptable for any file or folder if you replace `doc/04_examples/dvc_example/dataset` with your path. +> However, the process is adaptable for any file or folder if you replace `doc/examples/dvc_example/dataset` with your path. -1. Add the folder `doc/04_examples/dvc_example/dataset` to DVC +1. Add the folder `doc/examples/dvc_example/dataset` to DVC ```shell - dvc add doc/04_examples/dvc_example/dataset + dvc add doc/examples/dvc_example/dataset ``` - > ❗️ if you already added the directory to git you have to remove it by running `git rm -r --cached 'doc/04_examples/dvc_example/dataset'` + > ❗️ If you have already added the directory to git, you have to remove it by running `git rm -r --cached 'doc/examples/dvc_example/dataset'` 2. Commit your changes in git diff --git a/doc/02_development/05_git_workflow.md b/doc/development/git_workflow.md similarity index 85% rename from doc/02_development/05_git_workflow.md rename to doc/development/git_workflow.md index 7e721190..a90a7d1e 100644 --- a/doc/02_development/05_git_workflow.md +++ b/doc/development/git_workflow.md @@ -28,6 +28,10 @@ Josef Kircher - [Git style](#git-style-1) - [Branch naming](#branch-naming) - [For example](#for-example) + - [Branch naming workflow](#branch-naming-workflow) + - [Branch Creation Settings](#branch-creation-settings) + - [Creating a Branch in the Web Interface](#creating-a-branch-in-the-web-interface) + - [Creating a Branch in VSCode](#creating-a-branch-in-vscode) - [Commit messages](#commit-messages) - [Git commands cheat sheet](#git-commands-cheat-sheet) - [Sources](#sources) @@ -37,7 +41,7 @@ Josef Kircher ### Git Feature Branch -![Git Feature](../00_assets/git-flow.svg) +![Git Feature](../assets/git-flow.svg) #### Branch strategy @@ -76,7 +80,7 @@ The `.vscode/settings.json` file in this repository contains settings that autom To create a branch in the web interface, follow these steps: -![Create Branch](../00_assets/github_create_a_branch.png) +![Create
Branch](../assets/github_create_a_branch.png) #### Creating a Branch in VSCode @@ -90,7 +90,7 @@ In Visual Studio Code, use the "GitHub.vscode-pull-request-github" extension. --- -- proceed to [Commit Messages](./03_commit.md) +- proceed to [Commit Messages](./commit.md) ### Git commands cheat sheet diff --git a/doc/02_development/installing_cuda.md b/doc/development/installing_cuda.md similarity index 97% rename from doc/02_development/installing_cuda.md rename to doc/development/installing_cuda.md index 832325b1..566c3b1a 100644 --- a/doc/02_development/installing_cuda.md +++ b/doc/development/installing_cuda.md @@ -37,7 +37,7 @@ export LD_LIBRARY_PATH="/usr/local/cuda-x.y/lib64:$LD_LIBRARY_PATH" The path may be different depending on the system. You can get the path by executing ```which nvcc``` in the console. You can find your installed version of cuda-toolkit by executing ```nvcc --version```. The output should look like this: -![Implementation](../00_assets/nvcc_version.png) +![Implementation](../assets/nvcc_version.png) `release x.y` in the fourth column represents the version of the installed cuda-toolkit. diff --git a/doc/02_development/10_installing_python_packages.md b/doc/development/installing_python_packages.md similarity index 100% rename from doc/02_development/10_installing_python_packages.md rename to doc/development/installing_python_packages.md diff --git a/doc/02_development/09_linter_action.md b/doc/development/linter_action.md similarity index 93% rename from doc/02_development/09_linter_action.md rename to doc/development/linter_action.md index d041ba00..c71154dc 100644 --- a/doc/02_development/09_linter_action.md +++ b/doc/development/linter_action.md @@ -39,7 +39,7 @@ This is done by limiting the execution of the action by the following line: on: pull_request ``` -The actions uses the same linters described in the section [Linting](./02_linting.md). +The action uses the same linters described in the section [Linting](./linting.md).
Even though the linters are already executed during commit, the execution on pull request ensures that nobody skips the linter during commit. @@ -59,7 +59,7 @@ To enforce this behaviour, we set the action as a requirement as described in the > > [(Source)](https://stackoverflow.com/questions/60776412/github-actions-is-there-a-way-to-make-it-mandatory-for-pull-request-to-merge) -More information about creating and merging pull requests can be found [here](./08_project_management.md). +More information about creating and merging pull requests can be found [here](./project_management.md). ## 🚨 Common Problems @@ -68,14 +68,14 @@ More information about creating and merging pull requests can be found [here](./ If there are errors in any file which need to be fixed, the output of the action will look similar to this: -![markdown lint error](../00_assets/github-action-md.png) +![markdown lint error](../assets/github-action-md.png) ### 2. Error in the python linter If there are errors in any python file, the output of the action will look similar to this: -![python lint error](../00_assets/github-action-py.png) +![python lint error](../assets/github-action-py.png) This step even runs if the markdown linter has already failed. This way, all errors of different steps are directly visible diff --git a/doc/02_development/02_linting.md b/doc/development/linting.md similarity index 100% rename from doc/02_development/02_linting.md rename to doc/development/linting.md diff --git a/doc/02_development/08_project_management.md b/doc/development/project_management.md similarity index 92% rename from doc/02_development/08_project_management.md rename to doc/development/project_management.md index 97821432..e9917e5f 100644 --- a/doc/02_development/08_project_management.md +++ b/doc/development/project_management.md @@ -42,11 +42,11 @@ Any bugs or feature requests are managed in GitHub.
Bugs or features can be added [here](https://github.com/ll7/paf22/issues/new/choose) or via the [issue overview](https://github.com/ll7/paf22/issues). -![create issue](../00_assets/create_issue.png) +![create issue](../assets/create_issue.png) By clicking "New issue" in the overview or using the direct link above a wizard guides you to the creation of an issue: -![issue wizard](../00_assets/issue_wizard.png) +![issue wizard](../assets/issue_wizard.png) The possibilities are described in the following sections. @@ -60,7 +60,7 @@ If something is not expected to work, but you want to have it, please refer to t The documentation says that the vehicle should detect about 90% of the traffic lights. However, for you it ignores almost all traffic lights. -![bug template](../00_assets/bug_template.png) +![bug template](../assets/bug_template.png) ### 💡 Feature @@ -71,7 +71,7 @@ Use this template if you want a new Feature which is not implemented yet. Currently, the vehicle can't make u-turns. Implementing the ability to perform u-turns would be a new feature. -![feature template](../00_assets/feature_template.png) +![feature template](../assets/feature_template.png) ### 🚗 Bug in CARLA Simulator @@ -84,11 +84,11 @@ CARLA simulator crashes on startup on your machine. ## Create a Pull Request To create a pull request, go to the [branches overview](https://github.com/ll7/paf22/branches) and select ``New Pull Request`` for the branch you want to create a PR for. -![img.png](../00_assets/branch_overview.png) +![img.png](../assets/branch_overview.png) Merge the pull request after the review process is complete and all the feedback from the reviewer has been worked in. -For more information about the review process, see [Review process](./07_review_guideline.md). +For more information about the review process, see [Review process](./review_guideline.md). 
## Merging a Pull Request diff --git a/doc/02_development/07_review_guideline.md b/doc/development/review_guideline.md similarity index 92% rename from doc/02_development/07_review_guideline.md rename to doc/development/review_guideline.md index e42c4a17..dc0fc446 100644 --- a/doc/02_development/07_review_guideline.md +++ b/doc/development/review_guideline.md @@ -37,19 +37,19 @@ Josef Kircher ## How to review 1. Select the PR you want to review on GitHub -![img.png](../00_assets/PR_overview.png) +![img.png](../assets/PR_overview.png) 2. Go to Files Changed -![img.png](../00_assets/Files_Changed.png) +![img.png](../assets/Files_Changed.png) 3. Hover over the line where you want to add a comment and click on the blue `+` at the beginning of the line to add a comment -![img.png](../00_assets/Comment_PR.png) +![img.png](../assets/Comment_PR.png) 4. If you want to comment on multiple lines click and drag over these lines 5. In the comment field type your comment. How to write a good comment is handled in the next section. 6. You can also add a suggestion by using ``Ctrl+G`` or the small paper icon in the header line of the comment -![img.png](../00_assets/Suggestion.png) +![img.png](../assets/Suggestion.png) 7. If you are finished with the file you can check ``Viewed`` in the top right corner and the file collapses -![img.png](../00_assets/Comment_viewed.png) +![img.png](../assets/Comment_viewed.png) 8. To finish your review click ``Review Changes`` -![img.png](../00_assets/Review_changes.png) +![img.png](../assets/Review_changes.png) 9. Type a comment summarizing your review 10. Select the type of review you would like to leave: 11. Comment - General feedback without approval @@ -90,8 +90,8 @@ If the reviewer not only left comments but also made specific suggestions on code 2. Navigate to the first suggested change 3. If you want to commit that change in a single commit, click ``Commit suggestion`` 4.
If you want to put more changes together to a single commit, click ``Add suggestion to batch`` -![img.png](../00_assets/Commit_suggestion.png) -5. In the commit message field, type a short and meaningful commit message according to the [commit rules](./03_commit.md) +![img.png](../assets/Commit_suggestion.png) +5. In the commit message field, type a short and meaningful commit message according to the [commit rules](./commit.md) 6. Click ``Commit changes`` ### Re-requesting a review @@ -102,7 +102,7 @@ If you made substantial changes to your pull request and want to a fresh review If a comment of a review was resolved by either, a new commit or a discussion between the reviewer and the team that created the pull request, the conversation can be marked as resolved by clicking ``Resolve conversation`` in the ``Conversation`` or ``Files Changed`` tab of the pull request on GitHub. If a new commit took place it is encouraged to comment the commit SHA to have a connection between comment and resolving commit -![img.png](../00_assets/Resolve_conversation.png) +![img.png](../assets/Resolve_conversation.png) --- diff --git a/doc/02_development/templates/template_class.py b/doc/development/templates/template_class.py similarity index 100% rename from doc/02_development/templates/template_class.py rename to doc/development/templates/template_class.py diff --git a/doc/02_development/templates/template_class_no_comments.py b/doc/development/templates/template_class_no_comments.py similarity index 100% rename from doc/02_development/templates/template_class_no_comments.py rename to doc/development/templates/template_class_no_comments.py diff --git a/doc/02_development/templates/template_component_readme.md b/doc/development/templates/template_component_readme.md similarity index 100% rename from doc/02_development/templates/template_component_readme.md rename to doc/development/templates/template_component_readme.md diff --git 
a/doc/02_development/templates/template_wiki_page.md b/doc/development/templates/template_wiki_page.md similarity index 100% rename from doc/02_development/templates/template_wiki_page.md rename to doc/development/templates/template_wiki_page.md diff --git a/doc/02_development/templates/template_wiki_page_empty.md b/doc/development/templates/template_wiki_page_empty.md similarity index 100% rename from doc/02_development/templates/template_wiki_page_empty.md rename to doc/development/templates/template_wiki_page_empty.md diff --git a/doc/04_examples/dvc_example/.gitignore b/doc/examples/dvc_example/.gitignore similarity index 100% rename from doc/04_examples/dvc_example/.gitignore rename to doc/examples/dvc_example/.gitignore diff --git a/doc/04_examples/dvc_example/dataset.dvc b/doc/examples/dvc_example/dataset.dvc similarity index 100% rename from doc/04_examples/dvc_example/dataset.dvc rename to doc/examples/dvc_example/dataset.dvc diff --git a/doc/04_examples/gps_example/gps_signal_example.md b/doc/examples/gps_example/gps_signal_example.md similarity index 86% rename from doc/04_examples/gps_example/gps_signal_example.md rename to doc/examples/gps_example/gps_signal_example.md index 5314652c..92069b4f 100644 --- a/doc/04_examples/gps_example/gps_signal_example.md +++ b/doc/examples/gps_example/gps_signal_example.md @@ -2,7 +2,7 @@ **Summary:** This page explains how the GPS sensor is handled including a short example on how to use it. -**The Filter that's currently in use: [Kalman Filter](../../06_perception/08_kalman_filter.md)** +**The Filter that's currently in use: [Kalman Filter](../../perception/kalman_filter.md)** --- @@ -34,7 +34,7 @@ While latitude and longitude are measured in degrees, altitude is measured in me ## Filters for the sensor data As with all sensors provided by Carla, the GPS sensor output contains artificial noise. 
-![Unfiltered GPS signal](../../00_assets/filter_img/avg_1_w_1_000.png) +![Unfiltered GPS signal](../../assets/filter_img/avg_1_w_1_000.png) Right now there are multiple types of filters implemented. ### Intuitive filter @@ -49,11 +49,11 @@ parameters. The following graphs were taken while the car was stationary; the time on the bottom is therefore irrelevant. Shown is the position translated to a local coordinate system; the transformation will be discussed later. -![GPS signal (m=1, w=0,5)](../../00_assets/filter_img/avg_1_w_0_500.png) +![GPS signal (m=1, w=0,5)](../../assets/filter_img/avg_1_w_0_500.png) Using $w = 0.5$ clearly reduces the magnitude of the noise, however such a small value reduces the responsiveness of the output signal. -![GPS signal (m=1, w=0,5)](../../00_assets/filter_img/avg_10_w_1_000.png) +![GPS signal (m=1, w=0,5)](../../assets/filter_img/avg_10_w_1_000.png) Using a large number of data points ( $m = 10$ ) also reduces the magnitude of the noise. The main drawback here is the reduced frequency of the output signal, as the frequency of the output signal is $\frac{1}{m}$ that of the input signal. @@ -61,7 +61,7 @@ This can be avoided through the use of a rolling average where for every output the last $m$ inputs are taken into account. Combining these two parameters can further improve the result. -![GPS signal (m=1, w=0,5)](../../00_assets/filter_img/avg_20_w_0_750.png) +![GPS signal (m=1, w=0,5)](../../assets/filter_img/avg_20_w_0_750.png) The output signal's frequency has now been reduced to 1 Hz compared to the original 20 Hz, with the weight now being set to $w = 0.75$ @@ -76,7 +76,7 @@ whenever a new signal is received. Once new data is received the matrix is rotated by one position and the oldest measurement is overwritten. The output is equal to the average of all $n$ vectors.
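The buffer scheme just described — a matrix holding the last $n$ position vectors, rotated by one row whenever a new measurement arrives, with the mean of the rows as output — can be sketched roughly like this. The class and parameter names are illustrative assumptions, not the project's actual implementation:

```python
import numpy as np

class RollingAverageFilter:
    """Keeps the last n position measurements and outputs their mean."""

    def __init__(self, n=10, dim=3):
        self.buffer = np.zeros((n, dim))  # one row per stored measurement
        self.count = 0                    # measurements received so far

    def update(self, position):
        # Rotate the buffer by one row and overwrite the oldest entry.
        self.buffer = np.roll(self.buffer, 1, axis=0)
        self.buffer[0] = position
        self.count += 1
        # Average only over the rows that have actually been filled,
        # so the initial zeros do not bias the output.
        filled = min(self.count, len(self.buffer))
        return self.buffer[:filled].mean(axis=0)

f = RollingAverageFilter(n=3, dim=2)
f.update([1.0, 1.0])
f.update([3.0, 3.0])
print(f.update([5.0, 5.0]))  # mean of the last 3 points -> [3. 3.]
```

Unlike the decimating variant, this emits one output per input, so the output rate stays at the sensor rate while still averaging over $n$ samples.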
-![Rolling average filter (n=20)](../../00_assets/filter_img/rolling_avg_20.png) +![Rolling average filter (n=20)](../../assets/filter_img/rolling_avg_20.png) More arguments smooth out the gps signal, however they also add sluggishness to the output. The number of arguments taken into account can be adjusted using the @@ -84,21 +84,21 @@ The number of arguments taken into account can be adjusted using the This was the method ultimately chosen with $n=10$, leading to the following gps signal. -![Final gps signal (n=10)](../../00_assets/filter_img/rolling_avg_10.png) +![Final gps signal (n=10)](../../assets/filter_img/rolling_avg_10.png) ### Kalman Filter -A little more complex, but quicker reacting filter is the [Kalman Filter](../../06_perception/08_kalman_filter.md). +A little more complex, but quicker-reacting filter is the [Kalman Filter](../../perception/kalman_filter.md). It is heavily dependent on which system model you use and how you tune its parameters. When done correctly it reduces the GPS noise greatly without adding any delay to the output such as the filters above do. -![MAE Boxed Graph of Location Error with respect to ideal Location](../../../doc/00_assets/perception/data_26_MAE_Boxed.png) +![MAE Boxed Graph of Location Error with respect to ideal Location](../../../doc/assets/perception/data_26_MAE_Boxed.png) In the upper graph a smaller box indicates less noise. Also, the lower the values are, the less deviation from the ideal position we have. This is the graph that was used for tuning the Kalman parameters: -![MSE Boxed Graph of Location Error with respect to ideal Location](../../../doc/00_assets/perception/data_26_MSE_Boxed.png) +![MSE Boxed Graph of Location Error with respect to ideal Location](../../../doc/assets/perception/data_26_MSE_Boxed.png) It depicts the MSE (mean squared error) of the error distance to the ideal position. As you can see, the filtered positions are still noisy, but way closer to the ideal position.
In comparison, the running average filter is not as noisy, but constantly off by about 1 meter, because it is time delayed. diff --git a/doc/01_general/Readme.md b/doc/general/Readme.md similarity index 50% rename from doc/01_general/Readme.md rename to doc/general/Readme.md index 313b5e96..81105f81 100644 --- a/doc/01_general/Readme.md +++ b/doc/general/Readme.md @@ -2,5 +2,5 @@ This folder contains instructions on how to execute the project and what it does. -1. [Installation](./02_installation.md) -2. [Current architecture of the agent](./04_architecture.md) +1. [Installation](./installation.md) +2. [Current architecture of the agent](./architecture.md) diff --git a/doc/01_general/04_architecture.md b/doc/general/architecture.md similarity index 93% rename from doc/01_general/04_architecture.md rename to doc/general/architecture.md index c1a9de98..6f95c996 100644 --- a/doc/01_general/04_architecture.md +++ b/doc/general/architecture.md @@ -47,7 +47,7 @@ found [here](https://carla.readthedocs.io/projects/ros-bridge/en/latest/ros_sens The msgs necessary to control the vehicle via the Carla bridge can be found [here](https://carla.readthedocs.io/en/0.9.8/ros_msgs/#CarlaEgoVehicleControlmsg) -![Architecture overview](../00_assets/overview.jpg) +![Architecture overview](../assets/overview.jpg) The miro-board can be found [here](https://miro.com/welcomeonboard/a1F0d1dya2FneWNtbVk4cTBDU1NiN3RiZUIxdGhHNzJBdk5aS3N4VmdBM0R5c2Z1VXZIUUN4SkkwNHpuWlk2ZXwzNDU4NzY0NTMwNjYwNzAyODIzfDI=?share_link_id=785020837509). ## Perception @@ -55,8 +55,8 @@ The miro-board can be found [here](https://miro.com/welcomeonboard/a1F0d1dya2Fne The perception is responsible for the efficient conversion of raw sensor and map data into a useful environment representation that can be used by the [Planning](#Planning) for further processing. -Further information regarding the perception can be found [here](../06_perception/Readme.md).
-Research for the perception can be found [here](../03_research/02_perception/Readme.md). +Further information regarding the perception can be found [here](../perception/Readme.md). +Research for the perception can be found [here](../research/perception/Readme.md). ### Obstacle Detection and Classification @@ -120,10 +120,10 @@ The planning uses the data from the [Perception](#Perception) to find a path on its destination. It also detects situations and reacts accordingly in traffic. It publishes signals such as a trajectory or a target speed to acting. -Further information regarding the planning can be found [here](../07_planning/README.md). -Research for the planning can be found [here](../03_research/03_planning/Readme.md). +Further information regarding the planning can be found [here](../planning/README.md). +Research for the planning can be found [here](../research/planning/Readme.md). -### [Global Planning](../07_planning/Global_Planner.md) +### [Global Planning](../planning/Global_Planner.md) Uses information from the map and the path specified by CARLA to find a first concrete path to the next intermediate point. @@ -138,7 +138,7 @@ Publishes: - ```provisional_path``` ([nav_msgs/Path Message](http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Path.html)) -### [Decision Making](../07_planning/Behavior_tree.md) +### [Decision Making](../planning/Behavior_tree.md) Decides which speed is the right one to pass through a certain situation and also checks if an overtake is necessary.
@@ -157,7 +157,7 @@ Publishes: - ```curr_behavior``` ([std_msgs/String](https://docs.ros.org/en/api/std_msgs/html/msg/String.html)) -### [Local Planning](../07_planning/Local_Planning.md) +### [Local Planning](../planning/Local_Planning.md) It consists of three components: @@ -165,7 +165,7 @@ It consists of three components: - ACC: Generates a new speed based on a possible collision received from Collision Check and speed limits received from [Global Planner](#global-planning) - Motion Planning: Decides the target speed and modifies the trajectory if a signal is received from [Decision Making](#decision-making) -#### [Collision Check](../07_planning//Collision_Check.md) +#### [Collision Check](../planning/Collision_Check.md) Subscriptions: @@ -182,7 +182,7 @@ Publishes: - ```current_wp``` ([std_msgs/Float32](https://docs.ros.org/en/api/std_msgs/html/msg/Float32.html)) - ```speed_limit``` ([std_msgs/Float32](https://docs.ros.org/en/api/std_msgs/html/msg/Float32.html)) -#### [ACC](../07_planning/ACC.md) +#### [ACC](../planning/ACC.md) Subscriptions: @@ -195,7 +195,7 @@ Publishes: - ```collision``` ([std_msgs/Float32MultiArray](https://docs.ros.org/en/api/std_msgs/html/msg/Float32MultiArray.html)) - ```oncoming``` ([std_msgs/Float32](https://docs.ros.org/en/api/std_msgs/html/msg/Float32.html)) -#### [Motion Planning](../07_planning/motion_planning.md) +#### [Motion Planning](../planning/motion_planning.md) Subscriptions: @@ -225,9 +225,9 @@ Publishes: The job of this component is to take the planned trajectory and target-velocities from the [Planning](#Planning) component and convert them into steering and throttle/brake controls for the CARLA-vehicle. -All information regarding research done about acting can be found [here](../03_research/01_acting/Readme.md). +All information regarding research done about acting can be found [here](../research/acting/Readme.md).
-Indepth information about the currently implemented acting Components can be found [HERE](../05_acting/Readme.md)! +In-depth information about the currently implemented acting components can be found [HERE](../acting/Readme.md)! ### Path following with Steering Controllers @@ -245,7 +245,7 @@ Publishes: - ```steering_angle``` for ```vehicle_control_cmd``` ([CarlaEgoVehicleControl.msg](https://carla.readthedocs.io/en/0.9.8/ros_msgs/#CarlaEgoVehicleControlmsg)) -For further indepth information about the currently implemented Steering Controllers click [HERE](../05_acting/03_steering_controllers.md) +For further in-depth information about the currently implemented Steering Controllers click [HERE](../acting/steering_controllers.md) ### Velocity control @@ -263,7 +263,7 @@ Publishes: - ```reverse``` for ```vehicle_control_cmd``` ([CarlaEgoVehicleControl.msg](https://carla.readthedocs.io/en/0.9.8/ros_msgs/#CarlaEgoVehicleControlmsg)) -For further indepth information about the currently implemented Velocity Controller click [HERE](../05_acting/02_velocity_controller.md) +For further in-depth information about the currently implemented Velocity Controller click [HERE](../acting/velocity_controller.md) ### Vehicle controller @@ -282,7 +282,7 @@ Publishes: - ```vehicle_control_cmd``` ([CarlaEgoVehicleControl.msg](https://carla.readthedocs.io/en/0.9.8/ros_msgs/#CarlaEgoVehicleControlmsg)) -For further indepth information about the currently implemented Vehicle Controller click [HERE](../05_acting/04_vehicle_controller.md) +For further in-depth information about the currently implemented Vehicle Controller click [HERE](../acting/vehicle_controller.md) ## Visualization diff --git a/doc/01_general/02_installation.md b/doc/general/installation.md similarity index 97% rename from doc/01_general/02_installation.md rename to doc/general/installation.md index a0a1d332..c3620c60 100644 --- a/doc/01_general/02_installation.md +++ b/doc/general/installation.md @@ -69,7 +69,7 @@ sudo
systemctl restart docker Cannot find a compatible Vulkan Device. Try updating your video driver to a more recent version and make sure your video card supports Vulkan. -![Vulkan device not available](../00_assets/vulkan_device_not_available.png) +![Vulkan device not available](../assets/vulkan_device_not_available.png) Verify the issue with the following command: diff --git a/doc/perception/Readme.md b/doc/perception/Readme.md new file mode 100644 index 00000000..9b747565 --- /dev/null +++ b/doc/perception/Readme.md @@ -0,0 +1,22 @@ +# Documentation of perception component + +This folder contains further documentation of the perception components. + +1. [Vision Node](./vision_node.md) + - The Vision Node provides an adaptive interface that is able to perform object-detection and/or image-segmentation on multiple cameras at the same time. +2. [Position Heading Filter Debug Node](./position_heading_filter_debug_node.md) +3. [Kalman Filter](./kalman_filter.md) +4. [Position Heading Publisher Node](./position_heading_publisher_node.md) +5. [Distance to Objects](./distance_to_objects.md) +6. [Traffic Light Detection](./traffic_light_detection.md) +7. [Coordinate Transformation (helper functions)](./coordinate_transformation.md) +8. [Dataset Generator](./dataset_generator.md) +9. [Dataset Structure](./dataset_structure.md) +10. [Lidar Distance Utility](./lidar_distance_utility.md) + 1. not used since paf22 +11. [Efficient PS](./efficientps.md) + 1. not used since paf22 and never successfully tested ## Experiments +- The overview of performance evaluations is located in the [experiments](./experiments/README.md) folder.
diff --git a/doc/06_perception/00_coordinate_transformation.md b/doc/perception/coordinate_transformation.md similarity index 92% rename from doc/06_perception/00_coordinate_transformation.md rename to doc/perception/coordinate_transformation.md index 46eea5ca..a6180ec7 100644 --- a/doc/06_perception/00_coordinate_transformation.md +++ b/doc/perception/coordinate_transformation.md @@ -14,12 +14,12 @@ Robert Fischer -- [Coordinate Transformation](#coordonate-transformation) +- [Coordinate Transformation](#coordinate-transformation) - [Author](#author) - [Date](#date) - [Usage](#usage) - [Methods](#methods) - - [quat_to_heading(quaternion)](#quat_to_headingquaternion) + - [quat\_to\_heading(quaternion)](#quat_to_headingquaternion) @@ -95,7 +95,7 @@ $$ So we end up with a vector that's rotated into the x-y plane with the new x and y coordinates being `a` and `d`: -![quat_to_angle](../../doc/00_assets/perception/quat_to_angle.png) +![quat_to_angle](../../doc/assets/perception/quat_to_angle.png) Now all we need to do is calculate the angle $\theta$ around the z-axis which this vector creates between the x-axis and itself using the `atan` function: @@ -112,7 +112,7 @@ $$heading = \theta$$ def quat_to_heading(quaternion): """ Converts a quaternion to a heading of the car in radians - (see ../../doc/06_perception/00_coordinate_transformation.md) + (see ../../doc/perception/coordinate_transformation.md) :param quaternion: quaternion of the car as a list [q.x, q.y, q.z, q.w] where q is the quaternion :return: heading of the car in radians (float) diff --git a/doc/06_perception/01_dataset_generator.md b/doc/perception/dataset_generator.md similarity index 100% rename from doc/06_perception/01_dataset_generator.md rename to doc/perception/dataset_generator.md diff --git a/doc/06_perception/02_dataset_structure.md b/doc/perception/dataset_structure.md similarity index 97% rename from doc/06_perception/02_dataset_structure.md rename to doc/perception/dataset_structure.md index 
aecd0a40..1ba48dbc 100644 --- a/doc/06_perception/02_dataset_structure.md +++ b/doc/perception/dataset_structure.md @@ -26,7 +26,7 @@ Marco Riedenauer ## Converting the dataset -After creating the dataset with the [Dataset Generator](01_dataset_generator.md) or creating a dataset on your own, +After creating the dataset with the [Dataset Generator](dataset_generator.md) or creating a dataset on your own, execute the [Dataset Converter](../../code/perception/src/dataset_converter.py) to ensure that your dataset has the following structure: diff --git a/doc/06_perception/10_distance_to_objects.md b/doc/perception/distance_to_objects.md similarity index 92% rename from doc/06_perception/10_distance_to_objects.md rename to doc/perception/distance_to_objects.md index f49fb860..684ac59f 100644 --- a/doc/06_perception/10_distance_to_objects.md +++ b/doc/perception/distance_to_objects.md @@ -22,7 +22,7 @@ I found ways online, that seemed to solve this issue though. ### Concept -![3d_2d_porjection](../00_assets/3d_2d_projection.png) +![3d_2d_projection](../assets/3d_2d_projection.png) The goal is to calculate the projection of point P and find its Pixel-Coordinates (u,v) on the Image-Plane. To do this you need a couple of things: @@ -39,7 +39,7 @@ To do this you need a couple of things: The formula for this projection proposed by the literature looks like this: -![3d_2d_formula](../00_assets/3d_2d_formula.png) +![3d_2d_formula](../assets/3d_2d_formula.png) To get the camera-intrinsic matrix we need the width, height and fov of the image produced by the camera. Luckily we can easily get these values from the sensor configuration in (agent.py) @@ -65,7 +65,7 @@ To reconstruct the depth image, we simply implement the above formulas using num The resulting image takes the distance in meters as values for its pixels. It therefore is a grayscale image.
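The projection just described — building a camera-intrinsic matrix from the image width, height, and fov, then mapping a camera-frame point P to pixel coordinates (u, v) — can be sketched with numpy as follows. This is a minimal pinhole-model illustration; the function names and the assumed axis convention (x right, y down, z forward) are assumptions, not the node's actual code:

```python
import numpy as np

def intrinsic_matrix(width, height, fov_deg):
    """Pinhole camera intrinsics from image size and horizontal field of view."""
    f = width / (2.0 * np.tan(np.radians(fov_deg) / 2.0))  # focal length in pixels
    return np.array([[f, 0.0, width / 2.0],
                     [0.0, f, height / 2.0],
                     [0.0, 0.0, 1.0]])

def project(point_cam, K):
    """Project a 3D camera-frame point to pixel coordinates (u, v)."""
    uvw = K @ np.asarray(point_cam, dtype=float)
    # Divide by the third (depth) component to land on the image plane.
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

K = intrinsic_matrix(width=1280, height=720, fov_deg=90)
# A point on the optical axis, 10 m ahead, lands in the image center.
print(project([0.0, 0.0, 10.0], K))  # -> (640.0, 360.0)
```

Inverting this mapping per pixel, with the LIDAR point's depth as the z value, is what reconstructing the grayscale depth image amounts to.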
-![Grayscale Depth Image](../00_assets/2_15_layover.png) +![Grayscale Depth Image](../assets/2_15_layover.png) In the next step we want to get the distance for every bounding box the object-detection found. @@ -92,9 +92,9 @@ If there is no distance found in the depth image, we will return infinity for th This topic came to our attention, as we realised that the LIDAR was flickering, as you can see in the following image series. -![Grayscale Depth Image](../00_assets/2_layover.png) -![Grayscale Depth Image](../00_assets/3_layover.png) -![Grayscale Depth Image](../00_assets/4_layover.png) +![Grayscale Depth Image](../assets/2_layover.png) +![Grayscale Depth Image](../assets/3_layover.png) +![Grayscale Depth Image](../assets/4_layover.png) These are the Grayscale-Depth Images reconstructed within 600 milliseconds. diff --git a/doc/06_perception/04_efficientps.md b/doc/perception/efficientps.md similarity index 94% rename from doc/06_perception/04_efficientps.md rename to doc/perception/efficientps.md index 92ce43a4..7664cd6e 100644 --- a/doc/06_perception/04_efficientps.md +++ b/doc/perception/efficientps.md @@ -28,11 +28,11 @@ Marco Riedenauer ## Model Overview EfficientPS is a neural network designed for panoptic segmentation -(see [Panoptic Segmentation](../03_research/02_perception/03_first_implementation_plan.md#panoptic-segmentation)). +(see [Panoptic Segmentation](../research/perception/first_implementation_plan.md#panoptic-segmentation)). The model itself consists of 4 parts as can be seen in the following figure. The displayed shapes are incorrect in our case, since we used half the image size. 
-![EfficientPS Structure](../00_assets/efficientps_structure.png)
+![EfficientPS Structure](../assets/efficientps_structure.png)
 
 [Source](https://arxiv.org/pdf/2004.02307.pdf)
 
 - Feature Extraction:
diff --git a/doc/06_perception/experiments/README.md b/doc/perception/experiments/README.md
similarity index 100%
rename from doc/06_perception/experiments/README.md
rename to doc/perception/experiments/README.md
diff --git a/doc/06_perception/experiments/lanenet_evaluation/README.md b/doc/perception/experiments/lanenet_evaluation/README.md
similarity index 100%
rename from doc/06_perception/experiments/lanenet_evaluation/README.md
rename to doc/perception/experiments/lanenet_evaluation/README.md
diff --git a/doc/06_perception/experiments/lanenet_evaluation/assets/1600_lanes.jpg b/doc/perception/experiments/lanenet_evaluation/assets/1600_lanes.jpg
similarity index 100%
rename from doc/06_perception/experiments/lanenet_evaluation/assets/1600_lanes.jpg
rename to doc/perception/experiments/lanenet_evaluation/assets/1600_lanes.jpg
diff --git a/doc/06_perception/experiments/lanenet_evaluation/assets/1600_lanes_mask.jpg b/doc/perception/experiments/lanenet_evaluation/assets/1600_lanes_mask.jpg
similarity index 100%
rename from doc/06_perception/experiments/lanenet_evaluation/assets/1600_lanes_mask.jpg
rename to doc/perception/experiments/lanenet_evaluation/assets/1600_lanes_mask.jpg
diff --git a/doc/06_perception/experiments/lanenet_evaluation/assets/1619_lanes.jpg b/doc/perception/experiments/lanenet_evaluation/assets/1619_lanes.jpg
similarity index 100%
rename from doc/06_perception/experiments/lanenet_evaluation/assets/1619_lanes.jpg
rename to doc/perception/experiments/lanenet_evaluation/assets/1619_lanes.jpg
diff --git a/doc/06_perception/experiments/lanenet_evaluation/assets/1619_lanes_mask.jpg b/doc/perception/experiments/lanenet_evaluation/assets/1619_lanes_mask.jpg
similarity index 100%
rename from doc/06_perception/experiments/lanenet_evaluation/assets/1619_lanes_mask.jpg
rename to doc/perception/experiments/lanenet_evaluation/assets/1619_lanes_mask.jpg
diff --git a/doc/06_perception/experiments/lanenet_evaluation/assets/1660_lanes.jpg b/doc/perception/experiments/lanenet_evaluation/assets/1660_lanes.jpg
similarity index 100%
rename from doc/06_perception/experiments/lanenet_evaluation/assets/1660_lanes.jpg
rename to doc/perception/experiments/lanenet_evaluation/assets/1660_lanes.jpg
diff --git a/doc/06_perception/experiments/lanenet_evaluation/assets/1660_lanes_mask.jpg b/doc/perception/experiments/lanenet_evaluation/assets/1660_lanes_mask.jpg
similarity index 100%
rename from doc/06_perception/experiments/lanenet_evaluation/assets/1660_lanes_mask.jpg
rename to doc/perception/experiments/lanenet_evaluation/assets/1660_lanes_mask.jpg
diff --git a/doc/06_perception/experiments/lanenet_evaluation/assets/1663_lanes.jpg b/doc/perception/experiments/lanenet_evaluation/assets/1663_lanes.jpg
similarity index 100%
rename from doc/06_perception/experiments/lanenet_evaluation/assets/1663_lanes.jpg
rename to doc/perception/experiments/lanenet_evaluation/assets/1663_lanes.jpg
diff --git a/doc/06_perception/experiments/lanenet_evaluation/assets/1663_lanes_mask.jpg b/doc/perception/experiments/lanenet_evaluation/assets/1663_lanes_mask.jpg
similarity index 100%
rename from doc/06_perception/experiments/lanenet_evaluation/assets/1663_lanes_mask.jpg
rename to doc/perception/experiments/lanenet_evaluation/assets/1663_lanes_mask.jpg
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/README.md b/doc/perception/experiments/object-detection-model_evaluation/README.md
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/README.md
rename to doc/perception/experiments/object-detection-model_evaluation/README.md
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_PT_fasterrcnn_resnet50_fpn_v2.jpg b/doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_PT_fasterrcnn_resnet50_fpn_v2.jpg
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_PT_fasterrcnn_resnet50_fpn_v2.jpg
rename to doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_PT_fasterrcnn_resnet50_fpn_v2.jpg
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_TF_faster-rcnn.jpg b/doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_TF_faster-rcnn.jpg
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_TF_faster-rcnn.jpg
rename to doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_TF_faster-rcnn.jpg
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_nas_l.jpg b/doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_nas_l.jpg
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_nas_l.jpg
rename to doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_nas_l.jpg
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_rtdetr_x.jpg b/doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_rtdetr_x.jpg
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_rtdetr_x.jpg
rename to doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolo_rtdetr_x.jpg
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x.jpg b/doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x.jpg
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x.jpg
rename to doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x.jpg
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x_seg.jpg b/doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x_seg.jpg
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x_seg.jpg
rename to doc/perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x_seg.jpg
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/globals.py b/doc/perception/experiments/object-detection-model_evaluation/globals.py
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/globals.py
rename to doc/perception/experiments/object-detection-model_evaluation/globals.py
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/pt.py b/doc/perception/experiments/object-detection-model_evaluation/pt.py
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/pt.py
rename to doc/perception/experiments/object-detection-model_evaluation/pt.py
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/pylot.py b/doc/perception/experiments/object-detection-model_evaluation/pylot.py
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/pylot.py
rename to doc/perception/experiments/object-detection-model_evaluation/pylot.py
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/requirements.txt b/doc/perception/experiments/object-detection-model_evaluation/requirements.txt
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/requirements.txt
rename to doc/perception/experiments/object-detection-model_evaluation/requirements.txt
diff --git a/doc/06_perception/experiments/object-detection-model_evaluation/yolo.py b/doc/perception/experiments/object-detection-model_evaluation/yolo.py
similarity index 100%
rename from doc/06_perception/experiments/object-detection-model_evaluation/yolo.py
rename to doc/perception/experiments/object-detection-model_evaluation/yolo.py
diff --git a/doc/06_perception/experiments/traffic-light-detection_evaluation/README.md b/doc/perception/experiments/traffic-light-detection_evaluation/README.md
similarity index 100%
rename from doc/06_perception/experiments/traffic-light-detection_evaluation/README.md
rename to doc/perception/experiments/traffic-light-detection_evaluation/README.md
diff --git a/doc/06_perception/experiments/traffic-light-detection_evaluation/assets/back_1.png b/doc/perception/experiments/traffic-light-detection_evaluation/assets/back_1.png
similarity index 100%
rename from doc/06_perception/experiments/traffic-light-detection_evaluation/assets/back_1.png
rename to doc/perception/experiments/traffic-light-detection_evaluation/assets/back_1.png
diff --git a/doc/06_perception/experiments/traffic-light-detection_evaluation/assets/back_14.jpg b/doc/perception/experiments/traffic-light-detection_evaluation/assets/back_14.jpg
similarity index 100%
rename from doc/06_perception/experiments/traffic-light-detection_evaluation/assets/back_14.jpg
rename to doc/perception/experiments/traffic-light-detection_evaluation/assets/back_14.jpg
diff --git a/doc/06_perception/experiments/traffic-light-detection_evaluation/assets/green_22.jpg b/doc/perception/experiments/traffic-light-detection_evaluation/assets/green_22.jpg
similarity index 100%
rename from doc/06_perception/experiments/traffic-light-detection_evaluation/assets/green_22.jpg
rename to doc/perception/experiments/traffic-light-detection_evaluation/assets/green_22.jpg
diff --git a/doc/06_perception/experiments/traffic-light-detection_evaluation/assets/green_4.png b/doc/perception/experiments/traffic-light-detection_evaluation/assets/green_4.png
similarity index 100%
rename from doc/06_perception/experiments/traffic-light-detection_evaluation/assets/green_4.png
rename to doc/perception/experiments/traffic-light-detection_evaluation/assets/green_4.png
diff --git a/doc/06_perception/experiments/traffic-light-detection_evaluation/assets/red_10.png b/doc/perception/experiments/traffic-light-detection_evaluation/assets/red_10.png
similarity index 100%
rename from doc/06_perception/experiments/traffic-light-detection_evaluation/assets/red_10.png
rename to doc/perception/experiments/traffic-light-detection_evaluation/assets/red_10.png
diff --git a/doc/06_perception/experiments/traffic-light-detection_evaluation/assets/red_20.png b/doc/perception/experiments/traffic-light-detection_evaluation/assets/red_20.png
similarity index 100%
rename from doc/06_perception/experiments/traffic-light-detection_evaluation/assets/red_20.png
rename to doc/perception/experiments/traffic-light-detection_evaluation/assets/red_20.png
diff --git a/doc/06_perception/experiments/traffic-light-detection_evaluation/assets/yellow_1.png b/doc/perception/experiments/traffic-light-detection_evaluation/assets/yellow_1.png
similarity index 100%
rename from doc/06_perception/experiments/traffic-light-detection_evaluation/assets/yellow_1.png
rename to doc/perception/experiments/traffic-light-detection_evaluation/assets/yellow_1.png
diff --git a/doc/06_perception/experiments/traffic-light-detection_evaluation/assets/yellow_18.jpg b/doc/perception/experiments/traffic-light-detection_evaluation/assets/yellow_18.jpg
similarity index 100%
rename from doc/06_perception/experiments/traffic-light-detection_evaluation/assets/yellow_18.jpg
rename to doc/perception/experiments/traffic-light-detection_evaluation/assets/yellow_18.jpg
diff --git a/doc/06_perception/08_kalman_filter.md b/doc/perception/kalman_filter.md
similarity index 97%
rename from doc/06_perception/08_kalman_filter.md
rename to doc/perception/kalman_filter.md
index d79833f6..df59b3f7 100644
--- a/doc/06_perception/08_kalman_filter.md
+++ b/doc/perception/kalman_filter.md
@@ -48,7 +48,7 @@ to **"Kalman"**, depending on if you want to use the Filter for both the Positio
 
 In the case of using the Filter for both, it should look like this:
 
-![Kalman Filter for both parameters](../../doc/00_assets/perception/kalman_installation_guide.png)
+![Kalman Filter for both parameters](../../doc/assets/perception/kalman_installation_guide.png)
 
 No further installation needed.
 
@@ -250,9 +250,9 @@ Smaller boxes mean the data is closer together and less spread.
 The Kalman Filter was tuned to create the smallest MSE possible, which gives more weight to larger errors which we want to minimise.
 The MAE on the other hand shows a 1:1 representation in terms of distance from the ideal to the predicted location.
 
-![MSE Boxed Graph of Location Error with respect to ideal Location](../../doc/00_assets/perception/data_26_MSE_Boxed.png)
+![MSE Boxed Graph of Location Error with respect to ideal Location](../../doc/assets/perception/data_26_MSE_Boxed.png)
 
-![MAE Boxed Graph of Location Error with respect to ideal Location](../../doc/00_assets/perception/data_26_MAE_Boxed.png)
+![MAE Boxed Graph of Location Error with respect to ideal Location](../../doc/assets/perception/data_26_MAE_Boxed.png)
 
 As you see this data you might think the unfiltered data seems to be just as good if not even better than the previous rolling average filter (RAF).
diff --git a/doc/06_perception/03_lidar_distance_utility.md b/doc/perception/lidar_distance_utility.md
similarity index 98%
rename from doc/06_perception/03_lidar_distance_utility.md
rename to doc/perception/lidar_distance_utility.md
index f81d2904..fbcf2b7f 100644
--- a/doc/06_perception/03_lidar_distance_utility.md
+++ b/doc/perception/lidar_distance_utility.md
@@ -54,7 +54,7 @@ starting from 20cm above the ground you have to set min_z = -1.5.
 
 The meaning of the x and y values is described by the following image:
 
-![lidar filter](../00_assets/lidar_filter.png)
+![lidar filter](../assets/lidar_filter.png)
 
 ### Example
 
diff --git a/doc/06_perception/07_position_heading_filter_debug_node.md b/doc/perception/position_heading_filter_debug_node.md
similarity index 92%
rename from doc/06_perception/07_position_heading_filter_debug_node.md
rename to doc/perception/position_heading_filter_debug_node.md
index 34f7ec2c..09b10551 100644
--- a/doc/06_perception/07_position_heading_filter_debug_node.md
+++ b/doc/perception/position_heading_filter_debug_node.md
@@ -3,7 +3,7 @@
 **Summary:** [position_heading_filter_debug_node.py](../../code/perception/src/position_heading_filter_debug_node.py):
 The position_heading_filter_debug_node node is responsible for collecting sensor data from the IMU and GNSS and process the data in such a way, that it shows the errors between the real is-state and the measured state.
-The data can be looked at in rqt_plots or (better) in mathplotlib plots pre-made by the [viz.py](../../code/perception/src/00_Experiments/Position_Heading_Datasets/viz.py) file.
+The data can be looked at in rqt_plots or (better) in matplotlib plots pre-made by the [viz.py](../../code/perception/src/experiments/Position_Heading_Datasets/viz.py) file.
 
 !!THIS NODE USES THE CARLA API!!
 
@@ -48,12 +48,12 @@ If you are trying to implement a new position/ heading filter and want to tune i
 1. Create a new Filter Node class (if not already done) AND publish a paf/hero/filter_name_pos AND/OR filter_name_heading
 2. Change the topic of the test_filter_subscribers to your topic (currently kalman)
 
-![Subscriber Change](/doc/00_assets/perception/sensor_debug_change.png)
+![Subscriber Change](/doc/assets/perception/sensor_debug_change.png)
 
 If you want to save the debug in csv files for better debugging you should uncomment
 that part in the main loop of the file:
 
-![Save Files as CSV](/doc/00_assets/perception/sensor_debug_data_saving.png)
+![Save Files as CSV](/doc/assets/perception/sensor_debug_data_saving.png)
 
 ---
 
@@ -61,18 +61,18 @@ that part in the main loop of the file:
 
 Running the node provides you with ideal position and heading topics that can be used to debug your sensor filters by giving you ideal values you should aim for.
 
-It also provides you with helpful data saving methods for plotting your data (with regards to ideal values) by using the [viz.py](../../code/perception/src/00_Experiments/Position_Heading_Datasets/viz.py) file, which is a lot more customizable and nicer to use than rqt plots.
+It also provides you with helpful data saving methods for plotting your data (with regards to ideal values) by using the [viz.py](../../code/perception/src/experiments/Position_Heading_Datasets/viz.py) file, which is a lot more customizable and nicer to use than rqt plots.
 
 If you want to know more about how to use that, you can go on to [Visualization](#visualization)
 
 An Example of rqt plot Output can be seen here:
 
-![Distance from current_pos to ideal_gps_pos (blue) and to carla_pos (red)](../00_assets/gnss_ohne_rolling_average.png)
+![Distance from current_pos to ideal_gps_pos (blue) and to carla_pos (red)](../assets/gnss_ohne_rolling_average.png)
 
 The file is using a main loop with a fixed refresh rate, that can be changed in the perception launch file.
 In this loop it does the following things:
 
 1. Refresh the Ideal Position and Heading (using the Carla API)
 2. Update & Publish the Position & Heading Debug Values (see [Outputs](#outputs) for more info)
-3. Save the debug data in CSV files in the corresponding folder in code/perception/00_Experiments
+3. Save the debug data in CSV files in the corresponding folder in code/perception/experiments
 (can be outcommented if only working with rqt graphs is enough for you)
 
@@ -180,11 +180,11 @@ It can be used to debug X data, Y data and Heading (h) data.
 To be able to save data in csv files you just need to uncomment the saving methods in the main loop as stated in the [Getting Started](#getting-started) chapter.
 
-To use the [viz.py](../../code/perception/src/00_Experiments/Position_Heading_Datasets/viz.py) file you will have to:
+To use the [viz.py](../../code/perception/src/experiments/Position_Heading_Datasets/viz.py) file you will have to:
 
-1. Configure the main method to your likings inside the viz.py: ![picture](/doc/00_assets/perception/sensor_debug_viz_config.png)
+1. Configure the main method to your likings inside the viz.py: ![picture](/doc/assets/perception/sensor_debug_viz_config.png)
 2. Open up an attached shell
-3. Navigate to the code/perception/src/00_Experiments/Position_Heading folder using ```cd```
+3. Navigate to the code/perception/src/experiments/Position_Heading folder using ```cd```
 4. run the viz.py using ```python viz.py```
 
 With this file you can plot:
 
diff --git a/doc/06_perception/09_position_heading_publisher_node.md b/doc/perception/position_heading_publisher_node.md
similarity index 89%
rename from doc/06_perception/09_position_heading_publisher_node.md
rename to doc/perception/position_heading_publisher_node.md
index 30064a4f..5ff5daff 100644
--- a/doc/06_perception/09_position_heading_publisher_node.md
+++ b/doc/perception/position_heading_publisher_node.md
@@ -53,7 +53,7 @@ You can use filters for the heading and for the location independently using the
 
 In case of using the Kalman Filter for both, it should look like this:
 
-![Kalman Filter for both parameters](../../doc/00_assets/perception/kalman_installation_guide.png)
+![Kalman Filter for both parameters](../../doc/assets/perception/kalman_installation_guide.png)
 
 _If you want to create a new Filter in the future, I suggest keeping this template intact. See next Chapter_ 😊
 
@@ -68,19 +68,19 @@ For example: _Implementing a new non-linear Kalman Filter could look like this_:
 
 - _perception.launch file_:
 
-![Non Linear Kalman Filter Example](../../doc/00_assets/perception/non_linear_kalman_example.png)
+![Non Linear Kalman Filter Example](../../doc/assets/perception/non_linear_kalman_example.png)
 
 - _Subscribers_:
 
-![Non Linear Kalman Filter Example 2](../../doc/00_assets/perception/modular_subscriber_example.png)
+![Non Linear Kalman Filter Example 2](../../doc/assets/perception/modular_subscriber_example.png)
 
 - _Heading Methods_:
 
-![Non Linear Kalman Filter Example](../../doc/00_assets/perception/adding_new_position_methods.png)
+![Non Linear Kalman Filter Example](../../doc/assets/perception/adding_new_position_methods.png)
 
 - _Position Methods_:
 
-![Non Linear Kalman Filter Example](../../doc/00_assets/perception/new_heading_pub_example.png)
+![Non Linear Kalman Filter Example](../../doc/assets/perception/new_heading_pub_example.png)
 
 As you can see, this file is merely for gathering and
 forwarding the filter values in the form of currentPos and currentHeading.
 
@@ -100,7 +100,7 @@ If `none` is selected for the Filter, it publishes the data as the `current_pos`
 This method is called when new heading data is received. It handles all necessary updates and publishes the heading as a double value, indicating the cars rotation around the z-axis in rad.
 
-For more info about how the heading is calculated see [here](./00_coordinate_transformation.md).
+For more info about how the heading is calculated see [here](./coordinate_transformation.md).
 
 ### Position Functions
 
diff --git a/doc/06_perception/11_traffic_light_detection.md b/doc/perception/traffic_light_detection.md
similarity index 100%
rename from doc/06_perception/11_traffic_light_detection.md
rename to doc/perception/traffic_light_detection.md
diff --git a/doc/06_perception/06_vision_node.md b/doc/perception/vision_node.md
similarity index 75%
rename from doc/06_perception/06_vision_node.md
rename to doc/perception/vision_node.md
index 951350ac..3838c998 100644
--- a/doc/06_perception/06_vision_node.md
+++ b/doc/perception/vision_node.md
@@ -76,43 +76,43 @@ The object-detection can be run both ultralytics and pyTorch models. Depending o
 
 The object-detection can publish images to RViz under their specified camera angle and topic.
 
-![Object-Detection](../06_perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x_seg.jpg)
+![Object-Detection](../perception/experiments/object-detection-model_evaluation/asset-copies/1619_yolov8x_seg.jpg)
 
-Please refer to the [model evaluation](../06_perception/experiments/object-detection-model_evaluation/README.md) for more detailed information about the performance of each model.
+Please refer to the [model evaluation](../perception/experiments/object-detection-model_evaluation/README.md) for more detailed information about the performance of each model.
 **Center Camera**
 
-![Center Camera](../00_assets/Front_Detection.png)
+![Center Camera](../assets/Front_Detection.png)
 
 **Back Camera**
 
-![Back Camera](../00_assets/Back_Detection.png)
+![Back Camera](../assets/Back_Detection.png)
 
 **Left Camera**
 
-![Left Camera](../00_assets/Left_Detection.png)
+![Left Camera](../assets/Left_Detection.png)
 
 **Right Camera**
 
-![Right Camera](../00_assets/Right_Detection.png)
+![Right Camera](../assets/Right_Detection.png)
 
 ## 2. Distance-Calculation
 
-The Vision-Node reveives depth-images from the [lidar distance node](10_distance_to_objects.md) for the specified camera angle. It can than find the min x and min abs y distance within each bounding box that has been predicted by a model. This feature is implemented only for utralytics models.
+The Vision-Node receives depth-images from the [lidar distance node](distance_to_objects.md) for the specified camera angle. It can then find the min x and min abs y distance within each bounding box that has been predicted by a model. This feature is implemented only for ultralytics models.
 
 The depth images have the same dimension as the camera image and contain x, y and z coordinates of the lidar coordinates system in the three RGB-Channels.
 
-![Depth Image](../00_assets/2_15_layover.png)
+![Depth Image](../assets/2_15_layover.png)
 
-Read more about the calculation of Depth Image [here](10_distance_to_objects.md)
+Read more about the calculation of Depth Image [here](distance_to_objects.md)
 
 ## 3. Publishing of Outputs
 
-In order to provide valuble information for the [planning](../07_planning/README.md), the Vision-Node collects a set of information for each object and publishes a list of objects on the "distance_of_objects" Topic.
+In order to provide valuable information for the [planning](../planning/README.md), the Vision-Node collects a set of information for each object and publishes a list of objects on the "distance_of_objects" Topic.
 - Class_Index
 - Min_X
 - Min_Abs_Y
 
 When no Lidar-Points are found inside a bounding box, the distances will both be set to np.inf.
 
-Check also [here](10_distance_to_objects.md) to learn more about this list.
+Check also [here](distance_to_objects.md) to learn more about this list.
 
 In order to provide good visual feedback of what is calculated in the Vision-Node, each camera angle publishes images with bounding boxes and the corresponding distance values found for the object.
 
-![Distance of objects](../00_assets/distance_visualization.png)
+![Distance of objects](../assets/distance_visualization.png)
diff --git a/doc/07_planning/ACC.md b/doc/planning/ACC.md
similarity index 100%
rename from doc/07_planning/ACC.md
rename to doc/planning/ACC.md
diff --git a/doc/07_planning/Behavior_tree.md b/doc/planning/Behavior_tree.md
similarity index 96%
rename from doc/07_planning/Behavior_tree.md
rename to doc/planning/Behavior_tree.md
index d3db2ca6..11dc3498 100644
--- a/doc/07_planning/Behavior_tree.md
+++ b/doc/planning/Behavior_tree.md
@@ -48,7 +48,7 @@ Julius Miller
 
 ## About
 
-This Package implements a behaviour agent for our autonomous car using **Behaviour Trees**. It uses the [py_trees](./01_py_trees.md) Framework, that works well with ROS.
+This Package implements a behaviour agent for our autonomous car using **Behaviour Trees**. It uses the [py_trees](./py_trees.md) Framework, that works well with ROS.
 For visualization at runtime you might want to also install this [rqt-Plugin](https://wiki.ros.org/rqt_py_trees).
 
 ## Our behaviour tree
 
@@ -56,7 +56,7 @@ For visualization at runtime you might want to also install this [rqt-Plugin](ht
 The following section describes the behaviour tree we use for normal driving using all functionality provided by the agent. In the actual implementation this is part of a bigger tree, that handles things like writing topics to the blackboard, starting and finishing the decision tree.
 The following tree is a simplification.
-![Simple Tree](../00_assets/planning/simple_final_tree.png)
+![Simple Tree](../assets/planning/simple_final_tree.png)
 
 ### Behavior
 
@@ -86,9 +86,9 @@ Represents a specific task/scenario which is handled by the decision tree.
 
 #### Legend
 
-![BT Legend](../00_assets/legend_bt.png)
+![BT Legend](../assets/legend_bt.png)
 
-![BT Intersection](../00_assets/intersection.png)
+![BT Intersection](../assets/intersection.png)
 
 If there is an intersection coming up, the agent executes the following sequence of behaviours:
 
diff --git a/doc/07_planning/Collision_Check.md b/doc/planning/Collision_Check.md
similarity index 100%
rename from doc/07_planning/Collision_Check.md
rename to doc/planning/Collision_Check.md
diff --git a/doc/07_planning/Global_Planner.md b/doc/planning/Global_Planner.md
similarity index 100%
rename from doc/07_planning/Global_Planner.md
rename to doc/planning/Global_Planner.md
diff --git a/doc/07_planning/Local_Planning.md b/doc/planning/Local_Planning.md
similarity index 93%
rename from doc/07_planning/Local_Planning.md
rename to doc/planning/Local_Planning.md
index 87480cba..4624fd83 100644
--- a/doc/07_planning/Local_Planning.md
+++ b/doc/planning/Local_Planning.md
@@ -38,7 +38,7 @@ The Local Planning component is responsible for evaluating short term decisions
 
 The Local Planning in this project is divided in three components. Collision Check, Adaptive Cruise Control (ACC) and Motion Planning. The architecture can be seen below:
 
-![Planning_architecture.png](../00_assets/planning/Planning_architecture.png)
+![Planning_architecture.png](../assets/planning/Planning_architecture.png)
 
 The theoretical concepts of each Local Planning component are explained below.
 
@@ -46,14 +46,14 @@ The theoretical concepts of each Local Planning component are explained below.
 
 The Collision Check is the backbone of the Local Planning. Its task is to detect collisions with objects published by the vision node.
 The workflow when new objects are recieved looks like this:
 
-![collision_check.png](../00_assets/planning/collision_check.png)
+![collision_check.png](../assets/planning/collision_check.png)
 
 ### Apply filters
 
 The following input is recieved by the perception: $[class, min⁡(𝑎𝑏𝑠(𝑦)), min⁡(𝑥)]$ in a $(nx3)$-matrix
 
 Filtering steps:
 
-![vision_objects_filter_cc.png](../00_assets/planning/vision_objects_filter_cc.png)
+![vision_objects_filter_cc.png](../assets/planning/vision_objects_filter_cc.png)
 
 We filter for the following traffic objects: Pedestrians, bicycles, bikes, cars, busses and trucks.
 To filter oncoming traffic the $y$-distance is used as a deviation from the cars's middle axis (+ left, - right).
 
@@ -109,7 +109,7 @@ The Motion Planning is the central control of the Local Planning. Controlling th
 
 ### Cornering Speed
 
-![Corner Speed - Full Trajectory.png](../00_assets/planning/plot_full_trajectory_1_degree.png)
+![Corner Speed - Full Trajectory.png](../assets/planning/plot_full_trajectory_1_degree.png)
 
 The cornering speed gets calculated at the beginning of the scenario, when the full trajectory is received:
 
@@ -123,7 +123,7 @@ Lane changes are special, because you can drive the with normal speed eventhough
 The target velocity is a combination of the acc speed, the behavior speed and the cornering speed. Almost everytime the minimal speed is choosen. Exceptions are overtaking and the parking maneuver.
 
-![Scenario](../00_assets/planning/three_scenarios.png)
+![Scenario](../assets/planning/three_scenarios.png)
 
 In the first scenario on the left side the green ego vehicle chooses the acc speed to not cause a collision with the red car.
 In the second scenario the car is waiting at the intersection and chooses the behavior speed (wait at intersection), while the acc would say speedlimit.
@@ -131,7 +131,7 @@ In the last scenario the car chooses the cornering speed to smoothly perform a 9
 
 ### Moving the trajectory
 
-![Overtake](../00_assets/planning/Overtake_car_trajectory.png)
+![Overtake](../assets/planning/Overtake_car_trajectory.png)
 
 The trajectory gets moved a fixed amount of meters to the left if an overtake is triggered.
 
@@ -146,7 +146,7 @@ rotation_adjusted = Rotation.from_euler('z', self.current_heading +
 After generating our target roatation we generate a offset vector with the number of meters to move our points as x-value.
 Then we rotate this vector and add it to the desired waypoint (see red vector in figure below)
 
-![Vector math](../00_assets/planning/vector_calculation.png)
+![Vector math](../assets/planning/vector_calculation.png)
 
 ```python
 offset = np.array([offset_meters, 0, 0])
diff --git a/doc/07_planning/Preplanning.md b/doc/planning/Preplanning.md
similarity index 96%
rename from doc/07_planning/Preplanning.md
rename to doc/planning/Preplanning.md
index 4ed557b9..252e2ce9 100644
--- a/doc/07_planning/Preplanning.md
+++ b/doc/planning/Preplanning.md
@@ -42,7 +42,7 @@ No extra installation needed.
 
 The leaderboard provides target points and instructions. Every target point contains an appropriate instruction.
 
-![img.png](../00_assets/road_option.png)
+![img.png](../assets/road_option.png)
 
 We need to cover the following instructions for intersections:
 
@@ -65,7 +65,7 @@ clipping of town 12. It visualizes the agent (red triangle) and the first target
 from the leaderboard. It also shows the final trajectory on the right side. The picture covers a "turn right" and a lane "change left".
 
-![img.png](../00_assets/road_options_concept.png)
+![img.png](../assets/road_options_concept.png)
 
 ## Road information
 
@@ -85,7 +85,7 @@ only holds id values which have to be solved with the carla API. Also the name o
 That is why we would need to get the information for every traffic sign id from carla. This would crash with the leaderboard requirements.
 We are --not-- allowed to use ground truth information from the game engine.
 
-![img.png](../00_assets/Road0_cutout.png)
+![img.png](../assets/Road0_cutout.png)
 
 The picture shows the clipping of Road 0 from the leaderboard town 12.
 
@@ -99,7 +99,7 @@ If the road would be part of a junction, there would be a id value greater than
 A junction manages colliding roads. Junctions only exist when roads intersect.
 
-![img.png](../00_assets/junction.png)
+![img.png](../assets/junction.png)
 
 The picture above shows an intersection of roads. All possible ways through this intersection have to be covered.
 The picture shows a clipping of town 12.
 
@@ -107,7 +107,7 @@ The picture shows a clipping of town 12.
 To view a xodr file we used the following [viewer](https://odrviewer.io/). Very helpful tool to get a better understanding
 of an underlaying town and to debug the trajectory.
 
-![img.png](../00_assets/intersection_2.png)
+![img.png](../assets/intersection_2.png)
 
 The picture above shows an intersection. The agent is visualized with the red triangle and wants to drive through the
 intersection. He has three options, which are shown with the orange lines. The yellow point shows a target point
 
@@ -121,12 +121,12 @@ follow to cross the intersection.
 Every road has a geometry information. This information is important to interpolate the road correctly.
 
-![img.png](../00_assets/reference_xodr.png)
+![img.png](../assets/reference_xodr.png)
 
 The picture shows the clipping of a road with a curve. The road contains "line" segments and "arc curvature" segments.
 We have to interpolate this segments in the provided order to reconstruct the reference line of a road.
 
-![img.png](../00_assets/reference.png)
+![img.png](../assets/reference.png)
 
 The picture above shows the reference line with its different segments.
diff --git a/doc/07_planning/README.md b/doc/planning/README.md similarity index 92% rename from doc/07_planning/README.md rename to doc/planning/README.md index db33eda6..7123c208 100644 --- a/doc/07_planning/README.md +++ b/doc/planning/README.md @@ -34,7 +34,7 @@ After finishing that this node initiates the calculation of a trajectory based o from preplanning_trajectory.py. In the end the computed trajectory and prevailing speed limits are published to the other components of this project (acting, decision making,...). -![img.png](../00_assets/Global_Plan.png) +![img.png](../assets/Global_Plan.png) ### [Decision making](./Behavior_tree.md) @@ -42,11 +42,11 @@ The decision making collects most of the available information of the other comp the information. All possible traffic scenarios are covered in this component. The decision making uses a so-called decision tree, which is easy to adapt and to expand. -![Simple Tree](../00_assets/planning/simple_final_tree.png) +![Simple Tree](../assets/planning/simple_final_tree.png) ### [Local Planning](./Local_Planning.md) The Local Planning component is responsible for evaluating short-term decisions in the local environment of the ego vehicle. It contains components responsible for detecting collisions and reacting, e.g. by lowering speed. The local planning also executes behaviors, e.g. changing the trajectory for an overtake.
-![Overtake](../00_assets/planning/Overtake_car_trajectory.png) +![Overtake](../assets/planning/Overtake_car_trajectory.png) diff --git a/doc/07_planning/Unstuck_Behavior.md b/doc/planning/Unstuck_Behavior.md similarity index 91% rename from doc/07_planning/Unstuck_Behavior.md rename to doc/planning/Unstuck_Behavior.md index 025a46cf..3990fa69 100644 --- a/doc/07_planning/Unstuck_Behavior.md +++ b/doc/planning/Unstuck_Behavior.md @@ -69,5 +69,5 @@ Files influenced by this behavior are: - [motion_planning.py](/code/planning/src/local_planner/motion_planning.py), for the target_speed and overtake - [behavior_speed.py](/code/planning/src/behavior_agent/behaviours/behavior_speed.py), for the target_speed - Acting: - - [vehicle_controller.py](/doc/05_acting/04_vehicle_controller.md), because of driving backwards without steering - - [velocity_controller.py](/doc/05_acting/02_velocity_controller.md), because of the sepcial -3 target_speed case + - [vehicle_controller.py](/doc/acting/vehicle_controller.md), because of driving backwards without steering + - [velocity_controller.py](/doc/acting/velocity_controller.md), because of the special -3 target_speed case diff --git a/doc/07_planning/motion_planning.md b/doc/planning/motion_planning.md similarity index 100% rename from doc/07_planning/motion_planning.md rename to doc/planning/motion_planning.md diff --git a/doc/07_planning/01_py_trees.md b/doc/planning/py_trees.md similarity index 97% rename from doc/07_planning/01_py_trees.md rename to doc/planning/py_trees.md index 5d21de8c..bbc9fb91 100644 --- a/doc/07_planning/01_py_trees.md +++ b/doc/planning/py_trees.md @@ -55,7 +55,7 @@ Run rqt visualization for behaviour tree `rqt --standalone rqt_py_trees.behaviour_tree.RosBehaviourTree` -![img.png](../00_assets/behaviour_tree.png) +![img.png](../assets/behaviour_tree.png) Inspect data written to the behaviour tree diff --git a/doc/03_research/Leaderboard-2/changes_leaderboard2.md
b/doc/research/Leaderboard-2/changes_leaderboard2.md similarity index 96% rename from doc/03_research/Leaderboard-2/changes_leaderboard2.md rename to doc/research/Leaderboard-2/changes_leaderboard2.md index 39c23951..077cfb7b 100644 --- a/doc/03_research/Leaderboard-2/changes_leaderboard2.md +++ b/doc/research/Leaderboard-2/changes_leaderboard2.md @@ -16,7 +16,7 @@ Samuel Kühnel Leaderboard 1.0 | Leaderboard 2.0 :-------------------------:|:-------------------------: -![leaderboard-1](../../00_assets/leaderboard-1.png) | ![leaderboard-2](../../00_assets/leaderboard-2.png) +![leaderboard-1](../../assets/leaderboard-1.png) | ![leaderboard-2](../../assets/leaderboard-2.png) As shown in the images above, the new leaderboard seems to have way more traffic than the previous version. The leaderboard 2.0 uses an enhanced version of CARLA 0.9.14. So be aware that even if the documentation mentions this version tag, there are probably features missing. Therefore it is recommended to use the latest version. diff --git a/doc/research/Readme.md b/doc/research/Readme.md new file mode 100644 index 00000000..179f8c8b --- /dev/null +++ b/doc/research/Readme.md @@ -0,0 +1,10 @@ +# Research + +This folder contains all the research we did before we started the project.
+ +The research is structured in the following folders: + +- [Acting](./acting/Readme.md) +- [Perception](./perception/Readme.md) +- [Planning](./planning/Readme.md) +- [Requirements](./requirements/Readme.md) diff --git a/doc/research/acting/Readme.md b/doc/research/acting/Readme.md new file mode 100644 index 00000000..8d84f895 --- /dev/null +++ b/doc/research/acting/Readme.md @@ -0,0 +1,11 @@ +# Acting + +This folder contains all the results of our research on acting: + +- **PAF22** +  - [Basics](./basics_acting.md) +  - [Implementation](./implementation_acting.md) +- **PAF23** +  - [PAF21_1 Acting](./paf21_1_acting.md) +  - [PAF21_2 Acting & Pylot Control](./paf21_2_and_pylot_acting.md) +- [Autoware Control](./autoware_acting.md) diff --git a/doc/03_research/01_acting/05_autoware_acting.md b/doc/research/acting/autoware_acting.md similarity index 100% rename from doc/03_research/01_acting/05_autoware_acting.md rename to doc/research/acting/autoware_acting.md diff --git a/doc/03_research/01_acting/01_basics_acting.md b/doc/research/acting/basics_acting.md similarity index 96% rename from doc/03_research/01_acting/01_basics_acting.md rename to doc/research/acting/basics_acting.md index 1b6b41f2..3b4dce32 100644 --- a/doc/03_research/01_acting/01_basics_acting.md +++ b/doc/research/acting/basics_acting.md @@ -67,12 +67,12 @@ The steering angle $\delta$ is defined as the angle of the front wheel to a line This angle $\delta$ can also be defined as $tan(\delta) = L/R$ with $L$ as the wheelbase and $R$ the radius from the reference point (rear axle) to the Instantaneous Center of Rotation (ICR). Due to the bicycle model we can calculate $R = \frac{L}{tan(\delta)}$.
-![Bicycle Model with ICR](../../00_assets/research_assets/bicyclegeometry.png) +![Bicycle Model with ICR](../../assets/research_assets/bicyclegeometry.png) *source: [[2]](https://medium.com/roboquest/understanding-geometric-path-tracking-algorithms-stanley-controller-25da17bcc219)* We now try to aim the circular arc to intersect with a point on our trajectory. This target point is always a defined distance (look ahead distance $l_d$) away from our reference point (dangling carrot). This leads to the following relation: -![Dangling carrot geometry](../../00_assets/research_assets/danglingcarrotgeometry.png) +![Dangling carrot geometry](../../assets/research_assets/danglingcarrotgeometry.png) *source: [[2]](https://medium.com/roboquest/understanding-geometric-path-tracking-algorithms-stanley-controller-25da17bcc219)* $\frac{l_d}{sin(\alpha)}= 2R$, where $\alpha$ is the current heading error. Combining the two equations leads to our desired steering angle. @@ -98,7 +98,7 @@ $$ The Stanley controller, named after an autonomous offroad race car, takes the front axle as a reference, while still using the bicycle model. In addition to looking at the heading error $\psi$, similar to what pure pursuit does, Stanley also looks at the cross track error $e_e$. The cross track error $e_e$ is defined as the distance between the reference point and the closest point on our trajectory. -![Stanley error with heading and cross track error](../../00_assets/research_assets/stanleyerror.png) +![Stanley error with heading and cross track error](../../assets/research_assets/stanleyerror.png) *source: [[2]](https://medium.com/roboquest/understanding-geometric-path-tracking-algorithms-stanley-controller-25da17bcc219)* The first part of our steering angle tries to correct for this error $arctan(\frac{k_e*e_e}{k_v*v})$ while the second part just corrects for our heading error $\psi$.
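As an editorial aside, the two geometric relations quoted in the hunks above combine into compact steering laws. The sketch below is a minimal illustration of those formulas, not the controllers used in this repository; the gains `k_e` and `k_v` are the tunable parameters the text describes, and the function names are ours:

```python
import numpy as np

def pure_pursuit_steering(alpha, wheelbase, lookahead):
    # From l_d / sin(alpha) = 2R and R = L / tan(delta):
    # tan(delta) = 2 * L * sin(alpha) / l_d
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), lookahead)

def stanley_steering(psi, e_e, v, k_e=1.0, k_v=1.0):
    # Heading correction plus cross track correction:
    # delta = psi + arctan(k_e * e_e / (k_v * v))
    return psi + np.arctan2(k_e * e_e, k_v * v)
```

Both laws vanish when the heading and cross track errors are zero, which is a quick sanity check when tuning either controller.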
@@ -115,7 +115,7 @@ With $k_e$ and $k_v$ being tuneable parameters for cross tracking error and spee The basic idea of MPC is to model the future behavior of the vehicle and compute an optimal control input that minimizes an a priori defined cost functional. -![MPC Controller](../../00_assets/research_assets/mpc.png) +![MPC Controller](../../assets/research_assets/mpc.png) *source: [[5]](https://dingyan89.medium.com/three-methods-of-vehicle-lateral-control-pure-pursuit-stanley-and-mpc-db8cc1d32081)* - cost function can be designed to account for driving comfort @@ -125,7 +125,7 @@ The basic idea of MPC is to model the future behavior of the vehicle and compute SMC systems are designed to drive the system states onto a particular surface in the state space, named sliding surface. Once the sliding surface is reached, sliding mode control keeps the states in the close neighborhood of the sliding surface. Real implementations of sliding mode control approximate theoretical behavior with a high-frequency and generally non-deterministic switching control signal that causes the system to chatter. -![chattering](../../00_assets/research_assets/chattering.gif) +![chattering](../../assets/research_assets/chattering.gif) *source: [[9]](https://ieeexplore.ieee.org/document/1644542)* - simple diff --git a/doc/03_research/01_acting/02_implementation_acting.md b/doc/research/acting/implementation_acting.md similarity index 96% rename from doc/03_research/01_acting/02_implementation_acting.md rename to doc/research/acting/implementation_acting.md index dd7b45d5..fc763b36 100644 --- a/doc/03_research/01_acting/02_implementation_acting.md +++ b/doc/research/acting/implementation_acting.md @@ -25,7 +25,7 @@ Gabriel Schwald - [Next steps](#next-steps) -This document sums up all functions already agreed upon in [#24](https://github.com/ll7/paf22/issues/24) regarding [acting](../01_acting/01_acting.md), that could be implemented in the next sprint.
+This document sums up all functions already agreed upon in [#24](https://github.com/ll7/paf22/issues/24) regarding [acting](../acting/acting.md) that could be implemented in the next sprint. ## Planned basic implementation of the Acting domain diff --git a/doc/03_research/01_acting/03_paf21_1_acting.md b/doc/research/acting/paf21_1_acting.md similarity index 87% rename from doc/03_research/01_acting/03_paf21_1_acting.md rename to doc/research/acting/paf21_1_acting.md index c76dad25..0b03937d 100644 @@ -9,7 +9,7 @@ - Can detect curves on the planned trajectory - Calculates the speed at which to drive the detected curve -![Curve](../../00_assets/research_assets/curve_detection_paf21_1.png) +![Curve](../../assets/research_assets/curve_detection_paf21_1.png) ## Speed Control @@ -24,7 +24,7 @@ - **Stanley Steering Controller** - Calculates steering angle from offset and heading error - includes PID controller - ![Stanley Controller](../../00_assets/research_assets/stanley_paf21_1.png) + ![Stanley Controller](../../assets/research_assets/stanley_paf21_1.png) ### Detected Curves diff --git a/doc/03_research/01_acting/04_paf21_2_and_pylot_acting.md b/doc/research/acting/paf21_2_and_pylot_acting.md similarity index 97% rename from doc/03_research/01_acting/04_paf21_2_and_pylot_acting.md rename to doc/research/acting/paf21_2_and_pylot_acting.md index bb372dd6..8dfa05d6 100644 --- a/doc/03_research/01_acting/04_paf21_2_and_pylot_acting.md +++ b/doc/research/acting/paf21_2_and_pylot_acting.md @@ -9,13 +9,13 @@ - Lateral control - Pure Pursuit controller - ![Untitled](../../00_assets/research_assets/pure_pursuit.png) + ![Untitled](../../assets/research_assets/pure_pursuit.png) - Stanley controller - ![Untitled](../../00_assets/research_assets/stanley_controller.png) + ![Untitled](../../assets/research_assets/stanley_controller.png) -### [List of
Inputs/Outputs](https://github.com/una-auxme/paf/blob/main/doc/03_research/01_acting/02_acting_implementation.md#list-of-inputsoutputs) +### [List of Inputs/Outputs](https://github.com/una-auxme/paf/blob/main/doc/research/acting/acting_implementation.md#list-of-inputsoutputs) - Subscribes to: - [nav_msgs/Odometry Message](http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html) : to get the current position and heading @@ -25,7 +25,7 @@ - Publishes: - [CarlaEgoVehicleControl.msg](https://carla.readthedocs.io/projects/ros-bridge/en/latest/ros_msgs/#carlaegovehiclecontrolmsg) : to actually control the vehicle's throttle, steering -### [Challenges](https://github.com/una-auxme/paf/blob/main/doc/03_research/01_acting/02_acting_implementation.md#challenges) +### [Challenges](https://github.com/una-auxme/paf/blob/main/doc/research/acting/acting_implementation.md#challenges) A short list of challenges for the implementation of a basic acting domain and how these could be tackled based on the requirements mentioned above. @@ -42,7 +42,7 @@ A short list of challenges for the implementation of a basic acting domain and h ### [Standardroutine](https://github.com/ll7/paf21-2/tree/main/paf_ros/paf_actor#standardroutine) -![Untitled](../../00_assets/research_assets/standard_routine_paf21_2.png) +![Untitled](../../assets/research_assets/standard_routine_paf21_2.png) - Longitudinal control - PID controller @@ -149,7 +149,7 @@ Timer and thresholds to detect a stuck situation ### [Messages](https://github.com/ll7/paf21-2/tree/main/paf_ros/paf_actor#messages) -![Untitled](../../00_assets/research_assets/messages_paf21_2.png) +![Untitled](../../assets/research_assets/messages_paf21_2.png) ### [StanleyController](https://github.com/ll7/paf21-2/tree/main/paf_ros/paf_actor#stanleycontroller) @@ -224,7 +224,7 @@ implements a longitudinal and lateral controller - Predicts future states using a kinematic model to optimize control inputs.
- Parameters include mpc_horizon, mpc_steps, and mpc_weights -![Untitled](../../00_assets/research_assets/mpc.png) +![Untitled](../../assets/research_assets/mpc.png) • cost function can be designed to account for driving comfort diff --git a/doc/03_research/02_perception/LIDAR_data.md b/doc/research/perception/LIDAR_data.md similarity index 96% rename from doc/03_research/02_perception/LIDAR_data.md rename to doc/research/perception/LIDAR_data.md index b55cf7e4..ac62fa87 100644 --- a/doc/03_research/02_perception/LIDAR_data.md +++ b/doc/research/perception/LIDAR_data.md @@ -9,7 +9,7 @@ LIDAR-Data comes in Pointclouds from a specific LIDAR-Topic. `rospy.Subscriber(rospy.get_param('~source_topic', "/carla/hero/LIDAR"), PointCloud2, self.callback)` -Read more about the LIDAR-Sensor [here](https://github.com/una-auxme/paf/blob/main/doc/06_perception/03_lidar_distance_utility.md) +Read more about the LIDAR-Sensor [here](https://github.com/una-auxme/paf/blob/main/doc/perception/lidar_distance_utility.md) ## Processing diff --git a/doc/research/perception/Readme.md b/doc/research/perception/Readme.md new file mode 100644 index 00000000..8e6c5108 --- /dev/null +++ b/doc/research/perception/Readme.md @@ -0,0 +1,12 @@ +# Perception + +This folder contains all the results of research on perception: + +- **PAF22** + - [Basics](./basics.md) + - [First implementation plan](./first_implementation_plan.md) +- **PAF23** + - [Pylot Perception](./pylot.md) + - [PAF_21_2 Perception](./Research_PAF21-Perception.md) + - [PAF_21_1_Perception](./paf_21_1_perception.md) +- [Autoware Perception](./autoware-perception.md) diff --git a/doc/03_research/02_perception/05_Research_PAF21-Perception.md b/doc/research/perception/Research_PAF21-Perception.md similarity index 100% rename from doc/03_research/02_perception/05_Research_PAF21-Perception.md rename to doc/research/perception/Research_PAF21-Perception.md diff --git a/doc/03_research/02_perception/05-autoware-perception.md 
b/doc/research/perception/autoware-perception.md similarity index 100% rename from doc/03_research/02_perception/05-autoware-perception.md rename to doc/research/perception/autoware-perception.md diff --git a/doc/03_research/02_perception/02_basics.md b/doc/research/perception/basics.md similarity index 100% rename from doc/03_research/02_perception/02_basics.md rename to doc/research/perception/basics.md diff --git a/doc/03_research/02_perception/03_first_implementation_plan.md b/doc/research/perception/first_implementation_plan.md similarity index 97% rename from doc/03_research/02_perception/03_first_implementation_plan.md rename to doc/research/perception/first_implementation_plan.md index 65ddbbc2..4640b2cf 100644 --- a/doc/03_research/02_perception/03_first_implementation_plan.md +++ b/doc/research/perception/first_implementation_plan.md @@ -39,7 +39,7 @@ Marco Riedenauer ## Overview -![Implementation Plan Perception](../../00_assets/implementation_plan_perception.jpg) +![Implementation Plan Perception](../../assets/implementation_plan_perception.jpg) --- @@ -66,7 +66,7 @@ There are three different kinds of image segmentation: - **Panoptic Segmentation**: \ Combination of semantic segmentation and instance segmentation. Detection of stuff plus instances of things. 
-![Segmentation](../../00_assets/segmentation.png) +![Segmentation](../../assets/segmentation.png) [Source](https://www.v7labs.com/blog/panoptic-segmentation-guide) ### Image Panoptic Segmentation diff --git a/doc/03_research/02_perception/06_paf_21_1_perception.md b/doc/research/perception/paf_21_1_perception.md similarity index 100% rename from doc/03_research/02_perception/06_paf_21_1_perception.md rename to doc/research/perception/paf_21_1_perception.md diff --git a/doc/03_research/02_perception/04_pylot.md b/doc/research/perception/pylot.md similarity index 100% rename from doc/03_research/02_perception/04_pylot.md rename to doc/research/perception/pylot.md diff --git a/doc/03_research/03_planning/Readme.md b/doc/research/planning/Readme.md similarity index 81% rename from doc/03_research/03_planning/Readme.md rename to doc/research/planning/Readme.md index 67c5f196..7ab5f590 100644 --- a/doc/03_research/03_planning/Readme.md +++ b/doc/research/planning/Readme.md @@ -3,5 +3,5 @@ This folder contains all the results of research on planning from PAF 23 and 22. The research documents from the previous project were kept as they contain helpful information. 
The documents are separated in different folders: -- **[PAF22](./00_paf22/)** -- **[PAF23](./00_paf23/)** +- **[PAF22](./paf22/)** +- **[PAF23](./paf23/)** diff --git a/doc/03_research/03_planning/00_paf22/03_Implementation.md b/doc/research/planning/paf22/Implementation.md similarity index 94% rename from doc/03_research/03_planning/00_paf22/03_Implementation.md rename to doc/research/planning/paf22/Implementation.md index adfaa6dd..4a5c9f7d 100644 --- a/doc/03_research/03_planning/00_paf22/03_Implementation.md +++ b/doc/research/planning/paf22/Implementation.md @@ -34,7 +34,7 @@ Simon Erlbacher, Niklas Vogel ## Overview -![Implementation](../../00_assets/Planning_Implementierung.png) +![Implementation](../../assets/Planning_Implementierung.png) [Link to original](https://miro.com/app/board/uXjVP_LIQpE=/?share_link_id=806357474480) --- @@ -48,7 +48,7 @@ Either you use the given waypoints for start and goal values or alternatively th The Output (Solution of the planning problem) will be a route defined by a sequence of lanelets and a sequence of points (~ 10cm apart). Lanelet Model Example : -![Lanelet Model Example](../../00_assets/Lanelets.png) +![Lanelet Model Example](../../assets/Lanelets.png) [(Source)](https://github.com/ll7/psaf2/tree/main/Planning/global_planner) Input: @@ -66,10 +66,10 @@ Output: ## Decision Making -If an obstacle, which interferes with the own trajectory, is being recognized in the [perception](../02_perception), +If an obstacle, which interferes with the own trajectory, is being recognized in the [perception](../perception), the decision making sends a message to the local path planning where the system then chooses another trajectory/lanelet. With the Lanelets Model it is easier to give a prediction for other objects and the vehicle itself, -by following the lane direction of an object. With the prediction, which is mainly based inside the [perception](../02_perception), +by following the lane direction of an object. 
With the prediction, which is mainly based inside the [perception](../perception), it's then possible to check whether or not other objects interfere with ourselves. The decision making can be implemented with a state machine. Therefore there must be a state defined for every incoming perception/situation to ensure correct and safe behavior. @@ -95,7 +95,7 @@ Local Planner updates the current route, if Decision Making detects an obstacle. The Local Path Planner receives the lanelets, points and the path to drive. The local planner creates a velocity profile on the calculated trajectory based on curvature, crossings and traffic lights. -This will be calculated directly after the preplanning created a trajectory. The velocity value is published to the [acting side](../01_acting). +This will be calculated directly after the preplanning created a trajectory. The velocity value is published to the [acting side](../acting). Input: @@ -125,7 +125,7 @@ Output: ### Measure distance -This module measures the distance to obstacles, especially cars, with the Lidar Sensor. The current distance value is published to the [acting side](../01_acting) for keeping a safe distance (Adaptive Cruise Control). +This module measures the distance to obstacles, especially cars, with the Lidar Sensor. The current distance value is published to the [acting side](../acting) for keeping a safe distance (Adaptive Cruise Control).
Input: diff --git a/doc/03_research/03_planning/00_paf22/05_Navigation_Data.md b/doc/research/planning/paf22/Navigation_Data.md similarity index 100% rename from doc/03_research/03_planning/00_paf22/05_Navigation_Data.md rename to doc/research/planning/paf22/Navigation_Data.md diff --git a/doc/03_research/03_planning/00_paf22/07_OpenDrive.md b/doc/research/planning/paf22/OpenDrive.md similarity index 97% rename from doc/03_research/03_planning/00_paf22/07_OpenDrive.md rename to doc/research/planning/paf22/OpenDrive.md index e25b8b60..99999139 100644 --- a/doc/03_research/03_planning/00_paf22/07_OpenDrive.md +++ b/doc/research/planning/paf22/OpenDrive.md @@ -95,7 +95,7 @@ to search for the relevant area - Information about the reference lines (line which separates lanes) and their layout (linear, arc, cubic curves) - Information about the maximum speed -![OpenDrive stop sign](../../00_assets/Stop_sign_OpenDrive.png) +![OpenDrive stop sign](../../assets/Stop_sign_OpenDrive.png) Impression of the format There is a lot of information in the file, and a lot of it is not relevant for our project. @@ -220,7 +220,7 @@ at the beginning, when the ego-vehicle stays at its start position. is shorter to the target than the upper blue line. The method would choose the lower line because of the smaller distance -![preplanning_start](../../00_assets/preplanning_start.png) +![preplanning_start](../../assets/preplanning_start.png) Road Concepts @@ -233,7 +233,7 @@ the reference line.
That is why we have to filter the width information for our - after this we have the information which of the two perpendicular vectors we need to compute the points on the correct side of the reference line - we always choose the biggest width value to take the rightmost lane -![lane_midpoint](../../00_assets/lane_midpoint.png) +![lane_midpoint](../../assets/lane_midpoint.png) Scenario and concept to compute the midpoint of a lane - the second method takes the target position and the next command from the leaderboard @@ -270,19 +270,19 @@ At the moment we assume they are before a junction. In the following test scenario we added a manual start point on road 8. The following target points and commands for the next action have also been added manually. -![roads_vis](../../00_assets/roads_vis.png) +![roads_vis](../../assets/roads_vis.png) roads to interpolate -![trajectory_roads](../../00_assets/trajectory_roads.png) +![trajectory_roads](../../assets/trajectory_roads.png) roads chosen by the methods -![global_trajectory](../../00_assets/global_trajectory.png) +![global_trajectory](../../assets/global_trajectory.png) Global trajectory visualised -![local_trajectory](../../00_assets/local_trajectory.png) +![local_trajectory](../../assets/local_trajectory.png) One cutout of the trajectory diff --git a/doc/03_research/03_planning/00_paf22/02_basics.md b/doc/research/planning/paf22/basics.md similarity index 93% rename from doc/03_research/03_planning/00_paf22/02_basics.md rename to doc/research/planning/paf22/basics.md index 16f63ee7..bd40b06e 100644 --- a/doc/03_research/03_planning/00_paf22/02_basics.md +++ b/doc/research/planning/paf22/basics.md @@ -147,17 +147,17 @@ Sources: - - -![architektur gewinnterteam19](../../00_assets/gewinnerteam19-architektur.png) +![architektur gewinnterteam19](../../assets/gewinnerteam19-architektur.png) Overview of a possible architecture (winning team of the first competition) -![sensoranordnung](../../00_assets/sensoranordnung.png)
+![sensoranordnung](../../assets/sensoranordnung.png) Possible arrangement and number of sensors (6 cameras, 1 LIDAR, 2 GPS) ## Planning breakdown -![planning uebersicht](../../00_assets/planning%20%C3%BCbersicht.png) +![planning uebersicht](../../assets/planning%20%C3%BCbersicht.png) Planning overview @@ -174,7 +174,7 @@ The position of the vehicle can be determined by the two GPS trackers, and Alt stands for altitude and describes the height measured by the GPS trackers. The angle indicates the orientation of the vehicle. The x and y values contain the coordinates of the rear GPS tracker. -![positionsvektor](../../00_assets/positionsvektor.png) ![fahrzeugwinkelberechnung](../../00_assets/fahrzeugwinkelberechnung.png) +![positionsvektor](../../assets/positionsvektor.png) ![fahrzeugwinkelberechnung](../../assets/fahrzeugwinkelberechnung.png) Position vector and calculation of the vehicle angle to the target position @@ -183,7 +183,7 @@ If, however, the GPS signal is faulty or subject to interference, In this case a Kalman filter is implemented. It copes with disturbances and, based on the current position, gives a good prediction of future states of the vehicle. -![fahrzeugpositionsberechnung](../../00_assets/fahrzeugpositionsberechnung.png) +![fahrzeugpositionsberechnung](../../assets/fahrzeugpositionsberechnung.png) Calculation of the current and future vehicle position @@ -193,7 +193,7 @@ The LIDAR sensor produces point clouds of the environment. These are clustered with the DBSCAN algorithm, which copes well with outliers and can ignore them accordingly. With the calipers algorithm from the OpenCV library, the smallest possible rectangle that fits each cluster is generated.
-![lidarhinderniserkennung](../../00_assets/lidarhinderniserkennung.png) +![lidarhinderniserkennung](../../assets/lidarhinderniserkennung.png) Detecting obstacles with the LIDAR sensor @@ -205,15 +205,15 @@ The scalar product is thus close to 0, so an obstacle was detected. The This is supposed to represent an outlier. By introducing a threshold these detections can be excluded. Obstacles are detected with the occupancy grid: simply compute the distance of the points in a grid cell to the center of a circle and check whether the distance is smaller than the radius. -![occupancygrid](../../00_assets/occupancygrid.png) +![occupancygrid](../../assets/occupancygrid.png) 360 degree occupancy grid -![fahrzeugapproximation](../../00_assets/fahrzeugapproximation.png) +![fahrzeugapproximation](../../assets/fahrzeugapproximation.png) Approximation of a vehicle with three circles -![kollisionsberechnung](../../00_assets/kollisionsberechnung.png) +![kollisionsberechnung](../../assets/kollisionsberechnung.png) Simple calculation of a collision @@ -239,14 +239,14 @@ Assumption: all road users have constant speed (otherwise the calculation ## Decision Making (Behaviour Planner) -![kreuzungszonen](../../00_assets/kreuzungszonen.png) +![kreuzungszonen](../../assets/kreuzungszonen.png) Traffic scenario of an intersection with different zones. - Red area: the vehicle slows down - Green area: the vehicle comes to a stop - Orange area (intersection): the vehicle only enters this area if no other road user is detected in it -![statemachines](../../00_assets/statemachines.png) +![statemachines](../../assets/statemachines.png) Division into several state machines @@ -271,19 +271,19 @@ A problem here is the replanning of the trajectory due to unexpected obstacles The vehicle must take into account its future actions, its own state transitions, and the state transitions of other agents (e.g.
the switching of a traffic light). An input vector from the bicycle model is required. -![berechnungsmodell](../../00_assets/berechnungsmodell.png) +![berechnungsmodell](../../assets/berechnungsmodell.png) Model for calculating the current and future vehicle position -![trajektorienberechnung](../../00_assets/trajektorienberechnung.png) +![trajektorienberechnung](../../assets/trajektorienberechnung.png) Calculation of a trajectory -![optimierungsvisualisierung](../../00_assets/optimierungsvisualisierung.png) +![optimierungsvisualisierung](../../assets/optimierungsvisualisierung.png) Visualization of the optimization process during trajectory generation -![trajekorienfehlermin](../../00_assets/trajekorienfehlermin.png) +![trajekorienfehlermin](../../assets/trajekorienfehlermin.png) Error minimization in the trajectory calculation diff --git a/doc/03_research/03_planning/00_paf22/04_decision_making.md b/doc/research/planning/paf22/decision_making.md similarity index 100% rename from doc/03_research/03_planning/00_paf22/04_decision_making.md rename to doc/research/planning/paf22/decision_making.md diff --git a/doc/03_research/03_planning/00_paf22/07_reevaluation_desicion_making.md b/doc/research/planning/paf22/reevaluation_desicion_making.md similarity index 100% rename from doc/03_research/03_planning/00_paf22/07_reevaluation_desicion_making.md rename to doc/research/planning/paf22/reevaluation_desicion_making.md diff --git a/doc/03_research/03_planning/00_paf22/06_state_machine_design.md b/doc/research/planning/paf22/state_machine_design.md similarity index 97% rename from doc/03_research/03_planning/00_paf22/06_state_machine_design.md rename to doc/research/planning/paf22/state_machine_design.md index ad57715d..53bed6fb 100644 --- a/doc/03_research/03_planning/00_paf22/06_state_machine_design.md +++ b/doc/research/planning/paf22/state_machine_design.md @@ -44,7 +44,7 @@ Josef Kircher ## Super state machine -![img.png](../../00_assets/Super_SM.png)
+![img.png](../../assets/Super_SM.png) The super state machine functions as a controller of the main functions of the agent. @@ -56,7 +56,7 @@ Those functions are ## Driving state machine -![img.png](../../00_assets/Driving_SM.png) +![img.png](../../assets/Driving_SM.png) Transition: @@ -87,7 +87,7 @@ Set a new target speed and change back to `KEEP` state afterwards. ## Lane change state machine -![img.png](../../00_assets/Lane_Change_SM.png) +![img.png](../../assets/Lane_Change_SM.png) Transition: @@ -142,7 +142,7 @@ The lane change should be performed if the lane is free and there are no fast mo ## Intersection state machine -![img.png](../../00_assets/Intersection_SM.png) +![img.png](../../assets/Intersection_SM.png) Transition: @@ -201,7 +201,7 @@ Transition: ## Stop sign/traffic light state machine -![img.png](../../00_assets/Traffic_SM.png) +![img.png](../../assets/Traffic_SM.png) Transition: diff --git a/doc/03_research/03_planning/00_paf23/04_Local_planning_for_first_milestone.md b/doc/research/planning/paf23/Local_planning_for_first_milestone.md similarity index 82% rename from doc/03_research/03_planning/00_paf23/04_Local_planning_for_first_milestone.md rename to doc/research/planning/paf23/Local_planning_for_first_milestone.md index 431ffee9..48cbc1c2 100644 --- a/doc/03_research/03_planning/00_paf23/04_Local_planning_for_first_milestone.md +++ b/doc/research/planning/paf23/Local_planning_for_first_milestone.md @@ -16,7 +16,7 @@ Julius Miller Paper: [Behavior Planning for Autonomous Driving: Methodologies, Applications, and Future Orientation](https://www.researchgate.net/publication/369181112_Behavior_Planning_for_Autonomous_Driving_Methodologies_Applications_and_Future_Orientation) -![Overview_interfaces](../../../00_assets/planning/overview_paper1.png) +![Overview_interfaces](../../../assets/planning/overview_paper1.png) Rule-based planning @@ -49,7 +49,7 @@ Leader, Track-Speed Github: [Decision Making with Behaviour 
Tree](https://github.com/kirilcvetkov92/Path-planning?source=post_page-----8db1575fec2c--------------------------------) -![github_tree](../../../00_assets/planning/BehaviorTree_medium.png) +![github_tree](../../../assets/planning/BehaviorTree_medium.png) - No Intersection - Collision Detection in behaviour Tree @@ -58,7 +58,7 @@ Paper: [Behavior Trees for decision-making in Autonomous Driving](https://www.diva-portal.org/smash/get/diva2:907048/FULLTEXT01.pdf) -![Behaviour Tree](../../../00_assets/planning/BT_paper.png) +![Behaviour Tree](../../../assets/planning/BT_paper.png) - simple simulation - Car only drives straight @@ -81,17 +81,17 @@ Low Level Decision: - Emergency Brake - ACC -![localplan](../../../00_assets/planning/localplan.png) +![localplan](../../../assets/planning/localplan.png) Scenarios: -![Intersection](../../../00_assets/planning/intersection_scenario.png) +![Intersection](../../../assets/planning/intersection_scenario.png) Left: Behaviour Intersection is triggered for motion planning, acc publishes speed. 
-> Lower speed is used to approach intersection Right: Behaviour Intersection is used for motion planning, acc is ignored (no object in front) -![Overtake](../../../00_assets/planning/overtaking_scenario.png) +![Overtake](../../../assets/planning/overtaking_scenario.png) Left: Overtake gets triggered to maintain speed, acc is ignored diff --git a/doc/03_research/03_planning/00_paf23/03_PlannedArchitecture.md b/doc/research/planning/paf23/PlannedArchitecture.md similarity index 91% rename from doc/03_research/03_planning/00_paf23/03_PlannedArchitecture.md rename to doc/research/planning/paf23/PlannedArchitecture.md index 172cb9ca..2bb98cac 100644 --- a/doc/03_research/03_planning/00_paf23/03_PlannedArchitecture.md +++ b/doc/research/planning/paf23/PlannedArchitecture.md @@ -4,7 +4,7 @@ Provide an overview for a possible planning architecture consisting of Global P ## Overview -![overview](../../../00_assets/planning/overview.png) +![overview](../../../assets/planning/overview.png) The **Global Plan** gathers all data relevant to build a copy of the town the car is driving in. It also computes an optimal global path, which includes all waypoints. The Decision Making can order a recalculation of the global path. @@ -19,7 +19,7 @@ Motions like lane changing must be approved by the decision making and they get ### Global Plan -![overview](../../../00_assets/planning/Globalplan.png) +![overview](../../../assets/planning/Globalplan.png) *Map Generator:* Gathers map data from Carla and prepares it for the PrePlanner @@ -69,7 +69,7 @@ See Behaviour Tree. ### Local Plan -![Local Plan](../../../00_assets/planning/localplan.png) +![Local Plan](../../../assets/planning/localplan.png) *Local Preplan:* Segements the global path and calculates the middle of the lane. Is not called in every cycle. @@ -128,4 +128,4 @@ See Behaviour Tree. 
Red must have for next Milestone, Orange needed for future milestones, Green can already be used or is not that important -![prios](../../../00_assets/planning/prios.png) +![prios](../../../assets/planning/prios.png) diff --git a/doc/03_research/03_planning/00_paf23/01_Planning.md b/doc/research/planning/paf23/Planning.md similarity index 95% rename from doc/03_research/03_planning/00_paf23/01_Planning.md rename to doc/research/planning/paf23/Planning.md index a36283ac..0229ced2 100644 --- a/doc/03_research/03_planning/00_paf23/01_Planning.md +++ b/doc/research/planning/paf23/Planning.md @@ -6,7 +6,7 @@ Finding the optimal path from start to goal, taking into account the static and ### [PAF21 - 2](https://github.com/ll7/paf21-2) -![Planning](../../../00_assets/planning/Planning_paf21.png) +![Planning](../../../assets/planning/Planning_paf21.png) Input: @@ -55,7 +55,7 @@ Map Manager ### [Autoware](https://github.com/autowarefoundation/autoware) -![https://autowarefoundation.github.io/autoware-documentation/main/design/autoware-architecture/planning/](../../../00_assets/planning/Planning.png) +![https://autowarefoundation.github.io/autoware-documentation/main/design/autoware-architecture/planning/](../../../assets/planning/Planning.png) Input: diff --git a/doc/03_research/03_planning/00_paf23/02_PlanningPaf22.md b/doc/research/planning/paf23/PlanningPaf22.md similarity index 92% rename from doc/03_research/03_planning/00_paf23/02_PlanningPaf22.md rename to doc/research/planning/paf23/PlanningPaf22.md index 605605e7..85902213 100644 --- a/doc/03_research/03_planning/00_paf23/02_PlanningPaf22.md +++ b/doc/research/planning/paf23/PlanningPaf22.md @@ -4,7 +4,7 @@ ## Architecture -![overview](../../../00_assets/planning/overview.jpg) +![overview](../../../assets/planning/overview.jpg) ### Preplanning diff --git a/doc/03_research/03_planning/00_paf23/09_Research_Pylot_Planning.md b/doc/research/planning/paf23/Research_Pylot_Planning.md similarity index 100% rename from 
doc/03_research/03_planning/00_paf23/09_Research_Pylot_Planning.md rename to doc/research/planning/paf23/Research_Pylot_Planning.md diff --git a/doc/03_research/03_planning/00_paf23/Testing_frenet_trajectory_planner.md b/doc/research/planning/paf23/Testing_frenet_trajectory_planner.md similarity index 97% rename from doc/03_research/03_planning/00_paf23/Testing_frenet_trajectory_planner.md rename to doc/research/planning/paf23/Testing_frenet_trajectory_planner.md index ea300c31..b2581027 100644 --- a/doc/03_research/03_planning/00_paf23/Testing_frenet_trajectory_planner.md +++ b/doc/research/planning/paf23/Testing_frenet_trajectory_planner.md @@ -33,7 +33,7 @@ A test python file is also located [here](test_traj.py). The below image was gen The orange points represent a possible object and the blue points the old (left) and new (right) trajectory. -![test_trajectory](../../../00_assets/planning/test_frenet_results.png) +![test_trajectory](../../../assets/planning/test_frenet_results.png) ## Inputs diff --git a/doc/03_research/03_planning/00_paf23/08_paf21-1.md b/doc/research/planning/paf23/paf21-1.md similarity index 93% rename from doc/03_research/03_planning/00_paf23/08_paf21-1.md rename to doc/research/planning/paf23/paf21-1.md index 254dd811..a358a641 100644 --- a/doc/03_research/03_planning/00_paf23/08_paf21-1.md +++ b/doc/research/planning/paf23/paf21-1.md @@ -11,7 +11,7 @@ In PAF21-1, they divided the planning stage into two major components: - Global Planner - Local Planner -A more detailed explanation is already present in the [basics](../00_paf22/02_basics.md#paf-2021-1) chapter. +A more detailed explanation is already present in the [basics](../paf22/basics.md#paf-2021-1) chapter. 
--- diff --git a/doc/03_research/03_planning/00_paf23/test_traj.py b/doc/research/planning/paf23/test_traj.py similarity index 100% rename from doc/03_research/03_planning/00_paf23/test_traj.py rename to doc/research/planning/paf23/test_traj.py diff --git a/doc/research/requirements/Readme.md b/doc/research/requirements/Readme.md new file mode 100644 index 00000000..28fce181 --- /dev/null +++ b/doc/research/requirements/Readme.md @@ -0,0 +1,7 @@ +# Requirements + +This folder contains all the results of our research on requirements: + +- [Leaderboard information](./informations_from_leaderboard.md) +- [Requirements for agent](./requirements.md) +- [Use case scenarios](./use_cases.md) diff --git a/doc/03_research/04_requirements/02_informations_from_leaderboard.md b/doc/research/requirements/informations_from_leaderboard.md similarity index 100% rename from doc/03_research/04_requirements/02_informations_from_leaderboard.md rename to doc/research/requirements/informations_from_leaderboard.md diff --git a/doc/03_research/04_requirements/03_requirements.md b/doc/research/requirements/requirements.md similarity index 89% rename from doc/03_research/04_requirements/03_requirements.md rename to doc/research/requirements/requirements.md index 953bd900..8612e3c3 100644 --- a/doc/03_research/04_requirements/03_requirements.md +++ b/doc/research/requirements/requirements.md @@ -38,7 +38,7 @@ Josef Kircher, Simon Erlbacher ## Prioritized driving aspects -There are different ways to prioritize the driving aspects mentioned in the document [08_use_cases](https://github.com/ll7/paf22/blob/482c1f5a201b52276d7b77cf402009bd99c93317/doc/03_research/08_use_cases.md). +There are different ways to prioritize the driving aspects mentioned in the document [use_cases](https://github.com/ll7/paf22/blob/482c1f5a201b52276d7b77cf402009bd99c93317/doc/research/use_cases.md). The most important topics, in relation to this project, are the driving score and the safety aspect. 
Also, it is appropriate to implement the basic features of an autonomous car first. The list is a mixture of the different approaches. Prioritizing from very important functionalities to less important features. diff --git a/doc/03_research/04_requirements/04_use_cases.md b/doc/research/requirements/use_cases.md similarity index 97% rename from doc/03_research/04_requirements/04_use_cases.md rename to doc/research/requirements/use_cases.md index ee58d615..cf9a6570 100644 --- a/doc/03_research/04_requirements/04_use_cases.md +++ b/doc/research/requirements/use_cases.md @@ -179,7 +179,7 @@ Josef Kircher ## 1. Control loss due to bad road condition -![img](../../00_assets/TR01.png) +![img](../../assets/TR01.png) ### Description @@ -207,7 +207,7 @@ None ## 2. Unprotected left turn at intersection with oncoming traffic -![img](../../00_assets/TR08.png) +![img](../../assets/TR08.png) ### Description @@ -254,7 +254,7 @@ Turn left at the intersection without violating traffic rules ## 3. Right turn at an intersection with crossing traffic -![img](../../00_assets/TR09.png) +![img](../../assets/TR09.png) ### Description @@ -300,7 +300,7 @@ Turn right at the intersection without violating traffic rules ## 4. Crossing negotiation at unsignalized intersection -![img](../../00_assets/TR10.png) +![img](../../assets/TR10.png) ### Description @@ -343,7 +343,7 @@ Cross the intersection without violating traffic rules ## 5. Crossing traffic running a red light at intersection -![img](../../00_assets/TR07.png) +![img](../../assets/TR07.png) ### Description @@ -380,7 +380,7 @@ Emergency brake to avoid collision ## 6. Highway merge from on-ramp -![img](../../00_assets/TR18.png) +![img](../../assets/TR18.png) ### Description @@ -420,7 +420,7 @@ Join the highway traffic without any traffic violation ## 7. 
Highway cut-in from on-ramp -![img](../../00_assets/TR19.png) +![img](../../assets/TR19.png) ### Description @@ -462,7 +462,7 @@ Let vehicle join the highway traffic without any traffic violation ## 8. Static cut-in -![img](../../00_assets/TR20.png) +![img](../../assets/TR20.png) ### Description @@ -504,7 +504,7 @@ Let vehicle join the lane without any traffic violation ## 9. Highway exit -![img](../../00_assets/TR21.png) +![img](../../assets/TR21.png) ### Description @@ -550,7 +550,7 @@ Vehicle exits the highway traffic without any traffic violation ## 10. Yield to emergency vehicle -![img](../../00_assets/TR23.png) +![img](../../assets/TR23.png) ### Description @@ -590,7 +590,7 @@ Let emergency vehicle pass without any traffic violation ## 11. Obstacle in lane -![img](../../00_assets/TR14.png) +![img](../../assets/TR14.png) ### Description @@ -645,7 +645,7 @@ Pass an obstacle without any traffic violation ## 12. Door Obstacle -![img](../../00_assets/TR15.png) +![img](../../assets/TR15.png) ### Description @@ -694,7 +694,7 @@ Pass the open door without any traffic violation ## 13. Slow moving hazard at lane edge -![img](../../00_assets/TR16.png) +![img](../../assets/TR16.png) ### Description @@ -743,7 +743,7 @@ Pass the slow moving hazard without any traffic violation ## 14. Vehicle invading lane on bend -![img](../../00_assets/TR22.png) +![img](../../assets/TR22.png) ### Description @@ -779,7 +779,7 @@ None ## 15. Longitudinal control after leading vehicle brakes -![img](../../00_assets/TR02.png) +![img](../../assets/TR02.png) ### Description @@ -822,7 +822,7 @@ Slow down without crashing in vehicle in front of us ## 16. Obstacle avoidance without prior action -![img](../../00_assets/TR03.png) +![img](../../assets/TR03.png) ### Description @@ -873,7 +873,7 @@ Slow down without crashing in the obstacle in front of us ## 17. 
Pedestrian emerging from behind parked vehicle -![img](../../00_assets/TR17.png) +![img](../../assets/TR17.png) ### Description @@ -916,7 +916,7 @@ Slow down without crashing into the pedestrian in front of us ## 18. Obstacle avoidance with prior action -![img](../../00_assets/TR04.png) +![img](../../assets/TR04.png) ### Description @@ -955,7 +955,7 @@ Slow down without crashing into the obstacle in our path ## 19. Parking Cut-in -![img](../../00_assets/TR12.png) +![img](../../assets/TR12.png) ### Description @@ -992,7 +992,7 @@ Slow down without crashing into the car joining our lane ## 20. Lane changing to evade slow leading vehicle -![img](../../00_assets/TR05.png) +![img](../../assets/TR05.png) ### Description @@ -1039,7 +1039,7 @@ Change lane without any traffic violations ## 21. Passing obstacle with oncoming traffic -![img](../../00_assets/TR06.png) +![img](../../assets/TR06.png) ### Description @@ -1092,7 +1092,7 @@ Maneuver around obstacle without any traffic violations ## 22. Parking Exit -![img](../../00_assets/TR11.png) +![img](../../assets/TR11.png) ### Description From 2b1dc93b616a173a3d2f785d1254cb0bf4622f7f Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Wed, 9 Oct 2024 15:28:45 +0200 Subject: [PATCH 19/28] Fixed uppercase for README files --- README.md | 6 +- code/acting/readme.md | 67 ------------------- doc/acting/{Readme.md => README.md} | 0 doc/development/{Readme.md => README.md} | 2 +- doc/development/dvc.md | 2 +- doc/general/{Readme.md => README.md} | 0 doc/general/architecture.md | 10 +-- doc/general/installation.md | 4 ++ doc/perception/{Readme.md => README.md} | 0 doc/research/README.md | 10 +++ doc/research/Readme.md | 10 --- doc/research/acting/{Readme.md => README.md} | 0 .../perception/{Readme.md => README.md} | 0 .../planning/{Readme.md => README.md} | 0 .../requirements/{Readme.md => README.md} | 0 15 files changed, 24 insertions(+), 87 deletions(-) delete mode 100644 code/acting/readme.md rename doc/acting/{Readme.md => 
README.md} (100%) rename doc/development/{Readme.md => README.md} (96%) rename doc/general/{Readme.md => README.md} (100%) rename doc/perception/{Readme.md => README.md} (100%) create mode 100644 doc/research/README.md delete mode 100644 doc/research/Readme.md rename doc/research/acting/{Readme.md => README.md} (100%) rename doc/research/perception/{Readme.md => README.md} (100%) rename doc/research/planning/{Readme.md => README.md} (100%) rename doc/research/requirements/{Readme.md => README.md} (100%) diff --git a/README.md b/README.md index 03c77c38..f217afd6 100644 --- a/README.md +++ b/README.md @@ -27,12 +27,12 @@ To run the project you have to install [docker](https://docs.docker.com/engine/i [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). `docker` and `nvidia-docker` are used to run the project in a containerized environment with GPU support. -More detailed instructions about setup and execution can be found [here](./doc/general/Readme.md). +More detailed instructions about setup and execution can be found [here](./doc/general/README.md). ## Development -If you contribute to this project please read the guidelines first. They can be found [here](./doc/development/Readme.md). +If you contribute to this project please read the guidelines first. They can be found [here](./doc/development/README.md). ## Research -The research on existing projects we did can be found [here](./doc/research/Readme.md). +The research on existing projects we did can be found [here](./doc/research/README.md). diff --git a/code/acting/readme.md b/code/acting/readme.md deleted file mode 100644 index 39c73fbc..00000000 --- a/code/acting/readme.md +++ /dev/null @@ -1,67 +0,0 @@ -# Acting - -**Summary:** This package contains all functions implemented for the acting component. 
- ---- - -## Authors - -Alexander Hellmann - -## Date - -01.04.2024 - ---- - - -- [Acting](#acting) - - [Authors](#authors) - - [Date](#date) - - [Acting Documentation](#acting-documentation) - - [Test/Debug/Tune Acting-Components](#testdebugtune-acting-components) - - [Longitudinal controllers (Velocity Controller)](#longitudinal-controllers-velocity-controller) - - [Lateral controllers (Steering Controllers)](#lateral-controllers-steering-controllers) - - [Vehicle controller](#vehicle-controller) - - [Visualization of the HeroFrame in rviz](#visualization-of-the-heroframe-in-rviz) - - -## Acting Documentation - -In order to further understand the general idea of the taken approach to the acting component please refer to the documentation of the [research](../../doc/research/acting/Readme.md) done and see the planned [general definition](../../doc/general/architecture.md#acting). - -It is also highly recommended to go through the indepth [Acting-Documentation](../../doc/acting/Readme.md)! - -## Test/Debug/Tune Acting-Components - -The Acting_Debug_Node can be used as a simulated Planning package, publishing adjustable target velocities, steerings and trajectories as needed. - -For more information about this node and how to use it, please read the [documentation](../../doc/acting/acting_testing.md). -You can also find more information in the commented [code](./src/acting/Acting_Debug_Node.py). - -## Longitudinal controllers (Velocity Controller) - -The longitudinal controller is implemented as a PID velocity controller. - -For more information about this controller, either read the [documentation](../../doc/acting/velocity_controller.md) or go through the commented [code](./src/acting/velocity_controller.py). 
- -## Lateral controllers (Steering Controllers) - -There are two steering controllers currently implemented, both providing live telemetry via Debug-Messages: - -- Pure Persuit Controller (paf/hero/pure_p_debug) -- Stanley Controller (paf/hero/stanley_debug) - -For further information about the steering controllers, either read the [documentation](./../../doc/acting/steering_controllers.md) or go through the commented code of [stanley_controller](./src/acting/stanley_controller.py) or [purepursuit_controller](./src/acting/pure_pursuit_controller.py). - -## Vehicle controller - -The VehicleController collects all necessary msgs from the other controllers and publishes the [CarlaEgoVehicleControl](https://carla.readthedocs.io/en/0.9.8/ros_msgs/#carlaegovehiclecontrol) for the [Carla ros bridge](https://github.com/carla-simulator/ros-bridge). - -It also executes emergency-brakes and the unstuck-routine, if detected. - -For more information about this controller, either read the [documentation](../../doc/acting/vehicle_controller.md) or go through the commented [code](./src/acting/vehicle_controller.py). - -## Visualization of the HeroFrame in rviz - -For information about vizualizing the upcomming path in rviz see [Main frame publisher](../../doc/acting/main_frame_publisher.md) diff --git a/doc/acting/Readme.md b/doc/acting/README.md similarity index 100% rename from doc/acting/Readme.md rename to doc/acting/README.md diff --git a/doc/development/Readme.md b/doc/development/README.md similarity index 96% rename from doc/development/Readme.md rename to doc/development/README.md index 0d4e7bde..3cee2abe 100644 --- a/doc/development/Readme.md +++ b/doc/development/README.md @@ -30,7 +30,7 @@ If you just want to copy an empty class use this class. ### [`template_component_readme.md`](./templates/template_component_readme.md) -This template functions a template for who to describe a component. IT should be contained in every component as `Readme.md`. 
+This template serves as a template for how to describe a component. It should be contained in every component as `README.md`. ### [`template_wiki_page.md`](./templates/template_wiki_page.md) diff --git a/doc/development/dvc.md b/doc/development/dvc.md index acd8462d..52491342 100644 --- a/doc/development/dvc.md +++ b/doc/development/dvc.md @@ -162,7 +162,7 @@ navigate among them and commit only the ones that we need to Git." [(Source)](ht Detailed documentation with a [good example](https://github.com/iterative/example-dvc-experiments) can be found [here](https://dvc.org/doc/start/experiment-management/experiments). -A working experiment in this project can be found [here](../../code/perception/src/traffic_light_detection/Readme.md). +A working experiment in this project can be found [here](../../code/perception/src/traffic_light_detection/README.md). #### Setup a new experiment diff --git a/doc/general/Readme.md b/doc/general/README.md similarity index 100% rename from doc/general/Readme.md rename to doc/general/README.md diff --git a/doc/general/architecture.md b/doc/general/architecture.md index 6f95c996..7451a4cf 100644 --- a/doc/general/architecture.md +++ b/doc/general/architecture.md @@ -55,8 +55,8 @@ The miro-board can be found [here](https://miro.com/welcomeonboard/a1F0d1dya2Fne The perception is responsible for the efficient conversion of raw sensor and map data into a useful environment representation that can be used by the [Planning](#Planning) for further processing. -Further information regarding the perception can be found [here](../perception/Readme.md). -Research for the perception can be found [here](../research/perception/Readme.md). +Further information regarding the perception can be found [here](../perception/README.md). +Research for the perception can be found [here](../research/perception/README.md). ### Obstacle Detection and Classification @@ -121,7 +121,7 @@ its destination. It also detects situations and reacts accordingly in traffic. 
I speed to acting. Further information regarding the planning can be found [here](../planning/README.md). -Research for the planning can be found [here](../research/planning/Readme.md). +Research for the planning can be found [here](../research/planning/README.md). ### [Global Planning](../planning/Global_Planner.md) @@ -225,9 +225,9 @@ Publishes: The job of this component is to take the planned trajectory and target-velocities from the [Planning](#Planning) component and convert them into steering and throttle/brake controls for the CARLA-vehicle. -All information regarding research done about acting can be found [here](../research/acting/Readme.md). +All information regarding research done about acting can be found [here](../research/acting/README.md). -Indepth information about the currently implemented acting Components can be found [HERE](../acting/Readme.md)! +Indepth information about the currently implemented acting Components can be found [HERE](../acting/README.md)! ### Path following with Steering Controllers diff --git a/doc/general/installation.md b/doc/general/installation.md index c3620c60..e8ac3b26 100644 --- a/doc/general/installation.md +++ b/doc/general/installation.md @@ -62,6 +62,10 @@ Restart the Docker daemon to complete the installation after setting the default sudo systemctl restart docker ``` +## VS Code Extensions + +The repository comes with a suite of recommended VS Code extensions. Install them via the `Extensions` tab inside VS Code. + ## 🚨 Common Problems ### Vulkan device not available diff --git a/doc/perception/Readme.md b/doc/perception/README.md similarity index 100% rename from doc/perception/Readme.md rename to doc/perception/README.md diff --git a/doc/research/README.md b/doc/research/README.md new file mode 100644 index 00000000..9367f439 --- /dev/null +++ b/doc/research/README.md @@ -0,0 +1,10 @@ +# Research + +This folder contains every research we did before we started the project. 
+ +The research is structured in the following folders: + +- [Acting](./acting/README.md) +- [Perception](./perception/README.md) +- [Planning](./planning/README.md) +- [Requirements](./requirements/README.md) diff --git a/doc/research/Readme.md b/doc/research/Readme.md deleted file mode 100644 index 179f8c8b..00000000 --- a/doc/research/Readme.md +++ /dev/null @@ -1,10 +0,0 @@ -# Research - -This folder contains every research we did before we started the project. - -The research is structured in the following folders: - -- [Acting](./acting/Readme.md) -- [Perception](./perception/Readme.md) -- [Planning](./planning/Readme.md) -- [Requirements](./requirements/Readme.md) diff --git a/doc/research/acting/Readme.md b/doc/research/acting/README.md similarity index 100% rename from doc/research/acting/Readme.md rename to doc/research/acting/README.md diff --git a/doc/research/perception/Readme.md b/doc/research/perception/README.md similarity index 100% rename from doc/research/perception/Readme.md rename to doc/research/perception/README.md diff --git a/doc/research/planning/Readme.md b/doc/research/planning/README.md similarity index 100% rename from doc/research/planning/Readme.md rename to doc/research/planning/README.md diff --git a/doc/research/requirements/Readme.md b/doc/research/requirements/README.md similarity index 100% rename from doc/research/requirements/Readme.md rename to doc/research/requirements/README.md From 4de3c97349a8d5b5bb5ef903733e19c745cb9b15 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Wed, 9 Oct 2024 16:06:59 +0200 Subject: [PATCH 20/28] Refactored research docs --- doc/README.md | 50 +++++++++++++++++++ doc/research/README.md | 10 ++-- doc/research/acting/README.md | 11 ---- .../{ => paf22}/acting/basics_acting.md | 0 .../acting/implementation_acting.md | 0 doc/research/{ => paf22}/perception/basics.md | 0 .../perception/first_implementation_plan.md | 0 .../planning}/Implementation.md | 0 .../planning}/Navigation_Data.md | 0 
.../paf22 => paf22/planning}/OpenDrive.md | 0 .../paf22 => paf22/planning}/basics.md | 0 .../planning}/decision_making.md | 0 .../planning}/reevaluation_desicion_making.md | 0 .../planning}/state_machine_design.md | 0 .../{ => paf22}/requirements/README.md | 0 .../informations_from_leaderboard.md | 0 .../{ => paf22}/requirements/requirements.md | 0 .../{ => paf22}/requirements/use_cases.md | 0 .../{ => paf23}/acting/autoware_acting.md | 0 .../{ => paf23}/acting/paf21_1_acting.md | 0 .../acting/paf21_2_and_pylot_acting.md | 0 .../leaderboard/changes_leaderboard.md} | 0 .../{ => paf23}/perception/LIDAR_data.md | 0 .../perception/Research_PAF21-Perception.md | 0 .../perception/autoware-perception.md | 0 .../perception/paf_21_1_perception.md | 0 doc/research/{ => paf23}/perception/pylot.md | 0 .../Local_planning_for_first_milestone.md | 0 .../planning}/PlannedArchitecture.md | 0 .../paf23 => paf23/planning}/Planning.md | 0 .../paf23 => paf23/planning}/PlanningPaf22.md | 0 .../planning}/Research_Pylot_Planning.md | 0 .../Testing_frenet_trajectory_planner.md | 0 .../paf23 => paf23/planning}/paf21-1.md | 0 .../paf23 => paf23/planning}/test_traj.py | 0 doc/research/perception/README.md | 12 ----- doc/research/planning/README.md | 7 --- 37 files changed, 54 insertions(+), 36 deletions(-) create mode 100644 doc/README.md delete mode 100644 doc/research/acting/README.md rename doc/research/{ => paf22}/acting/basics_acting.md (100%) rename doc/research/{ => paf22}/acting/implementation_acting.md (100%) rename doc/research/{ => paf22}/perception/basics.md (100%) rename doc/research/{ => paf22}/perception/first_implementation_plan.md (100%) rename doc/research/{planning/paf22 => paf22/planning}/Implementation.md (100%) rename doc/research/{planning/paf22 => paf22/planning}/Navigation_Data.md (100%) rename doc/research/{planning/paf22 => paf22/planning}/OpenDrive.md (100%) rename doc/research/{planning/paf22 => paf22/planning}/basics.md (100%) rename doc/research/{planning/paf22 
=> paf22/planning}/decision_making.md (100%) rename doc/research/{planning/paf22 => paf22/planning}/reevaluation_desicion_making.md (100%) rename doc/research/{planning/paf22 => paf22/planning}/state_machine_design.md (100%) rename doc/research/{ => paf22}/requirements/README.md (100%) rename doc/research/{ => paf22}/requirements/informations_from_leaderboard.md (100%) rename doc/research/{ => paf22}/requirements/requirements.md (100%) rename doc/research/{ => paf22}/requirements/use_cases.md (100%) rename doc/research/{ => paf23}/acting/autoware_acting.md (100%) rename doc/research/{ => paf23}/acting/paf21_1_acting.md (100%) rename doc/research/{ => paf23}/acting/paf21_2_and_pylot_acting.md (100%) rename doc/research/{Leaderboard-2/changes_leaderboard2.md => paf23/leaderboard/changes_leaderboard.md} (100%) rename doc/research/{ => paf23}/perception/LIDAR_data.md (100%) rename doc/research/{ => paf23}/perception/Research_PAF21-Perception.md (100%) rename doc/research/{ => paf23}/perception/autoware-perception.md (100%) rename doc/research/{ => paf23}/perception/paf_21_1_perception.md (100%) rename doc/research/{ => paf23}/perception/pylot.md (100%) rename doc/research/{planning/paf23 => paf23/planning}/Local_planning_for_first_milestone.md (100%) rename doc/research/{planning/paf23 => paf23/planning}/PlannedArchitecture.md (100%) rename doc/research/{planning/paf23 => paf23/planning}/Planning.md (100%) rename doc/research/{planning/paf23 => paf23/planning}/PlanningPaf22.md (100%) rename doc/research/{planning/paf23 => paf23/planning}/Research_Pylot_Planning.md (100%) rename doc/research/{planning/paf23 => paf23/planning}/Testing_frenet_trajectory_planner.md (100%) rename doc/research/{planning/paf23 => paf23/planning}/paf21-1.md (100%) rename doc/research/{planning/paf23 => paf23/planning}/test_traj.py (100%) delete mode 100644 doc/research/perception/README.md delete mode 100644 doc/research/planning/README.md diff --git a/doc/README.md b/doc/README.md new file 
mode 100644 index 00000000..7546ccb8 --- /dev/null +++ b/doc/README.md @@ -0,0 +1,50 @@ +# PAF Documentation + +This document provides an overview of the structure of the documentation. + +- [PAF Documentation](#paf-documentation) + - [`general`](#general) + - [`development`](#development) + - [`research`](#research) + - [`examples`](#examples) + - [`perception`](#perception) + - [`planning`](#planning) + - [`acting`](#acting) + - [`assets`](#assets) + - [`dev_talks`](#dev_talks) + +## `general` + +The [`general`](./general/) folder contains installation instructions for the project and an overview of the system architecture. + +## `development` + +The [`development`](./development/) folder contains guidelines for developing inside the project. It also provides templates for documentation files and python classes. Further information can be found in the [README](development/README.md). + +## `research` + +The [`research`](./research/) folder contains the findings of each group during the initial phase of the project. + +## `examples` + +To-do + +## `perception` + +The [`perception`](./perception/) folder contains documentation for the whole perception module and its individual components. + +## `planning` + +The [`planning`](./planning/) folder contains documentation for the whole planning module and its individual components. + +## `acting` + +The [`acting`](./acting/) folder contains documentation for the whole acting module and its individual components. + +## `assets` + +The [`assets`](./assets/) folder contains mainly images that are used inside the documentation. + +## `dev_talks` + +The [`dev_talks`](./dev_talks/) folder contains the protocols of each sprint review and roles that the students fill during the project. 
diff --git a/doc/research/README.md b/doc/research/README.md index 9367f439..f4703cc3 100644 --- a/doc/research/README.md +++ b/doc/research/README.md @@ -1,10 +1,8 @@ # Research -This folder contains every research we did before we started the project. +This folder contains the research of each individual group at the start of the project. -The research is structured in the following folders: +The research is structured in folders for each year: -- [Acting](./acting/README.md) -- [Perception](./perception/README.md) -- [Planning](./planning/README.md) -- [Requirements](./requirements/README.md) +- [PAF22](./paf22/) +- [PAF23](./paf23/) diff --git a/doc/research/acting/README.md b/doc/research/acting/README.md deleted file mode 100644 index 8d84f895..00000000 --- a/doc/research/acting/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# Acting - -This folder contains all the results of our research on acting: - -- **PAF22** -- [Basics](./basics_acting.md) -- [Implementation](./implementation_acting.md) -- **PAF23** -- [PAF21_1 Acting](./paf21_1_acting.md) -- [PAF21_2 Acting & Pylot Control](./paf21_2_and_pylot_acting.md) -- [Autoware Control](./autoware_acting.md) diff --git a/doc/research/acting/basics_acting.md b/doc/research/paf22/acting/basics_acting.md similarity index 100% rename from doc/research/acting/basics_acting.md rename to doc/research/paf22/acting/basics_acting.md diff --git a/doc/research/acting/implementation_acting.md b/doc/research/paf22/acting/implementation_acting.md similarity index 100% rename from doc/research/acting/implementation_acting.md rename to doc/research/paf22/acting/implementation_acting.md diff --git a/doc/research/perception/basics.md b/doc/research/paf22/perception/basics.md similarity index 100% rename from doc/research/perception/basics.md rename to doc/research/paf22/perception/basics.md diff --git a/doc/research/perception/first_implementation_plan.md b/doc/research/paf22/perception/first_implementation_plan.md similarity index 100% 
rename from doc/research/perception/first_implementation_plan.md
rename to doc/research/paf22/perception/first_implementation_plan.md
diff --git a/doc/research/planning/paf22/Implementation.md b/doc/research/paf22/planning/Implementation.md
similarity index 100%
rename from doc/research/planning/paf22/Implementation.md
rename to doc/research/paf22/planning/Implementation.md
diff --git a/doc/research/planning/paf22/Navigation_Data.md b/doc/research/paf22/planning/Navigation_Data.md
similarity index 100%
rename from doc/research/planning/paf22/Navigation_Data.md
rename to doc/research/paf22/planning/Navigation_Data.md
diff --git a/doc/research/planning/paf22/OpenDrive.md b/doc/research/paf22/planning/OpenDrive.md
similarity index 100%
rename from doc/research/planning/paf22/OpenDrive.md
rename to doc/research/paf22/planning/OpenDrive.md
diff --git a/doc/research/planning/paf22/basics.md b/doc/research/paf22/planning/basics.md
similarity index 100%
rename from doc/research/planning/paf22/basics.md
rename to doc/research/paf22/planning/basics.md
diff --git a/doc/research/planning/paf22/decision_making.md b/doc/research/paf22/planning/decision_making.md
similarity index 100%
rename from doc/research/planning/paf22/decision_making.md
rename to doc/research/paf22/planning/decision_making.md
diff --git a/doc/research/planning/paf22/reevaluation_desicion_making.md b/doc/research/paf22/planning/reevaluation_desicion_making.md
similarity index 100%
rename from doc/research/planning/paf22/reevaluation_desicion_making.md
rename to doc/research/paf22/planning/reevaluation_desicion_making.md
diff --git a/doc/research/planning/paf22/state_machine_design.md b/doc/research/paf22/planning/state_machine_design.md
similarity index 100%
rename from doc/research/planning/paf22/state_machine_design.md
rename to doc/research/paf22/planning/state_machine_design.md
diff --git a/doc/research/requirements/README.md b/doc/research/paf22/requirements/README.md
similarity index 100%
rename from doc/research/requirements/README.md
rename to doc/research/paf22/requirements/README.md
diff --git a/doc/research/requirements/informations_from_leaderboard.md b/doc/research/paf22/requirements/informations_from_leaderboard.md
similarity index 100%
rename from doc/research/requirements/informations_from_leaderboard.md
rename to doc/research/paf22/requirements/informations_from_leaderboard.md
diff --git a/doc/research/requirements/requirements.md b/doc/research/paf22/requirements/requirements.md
similarity index 100%
rename from doc/research/requirements/requirements.md
rename to doc/research/paf22/requirements/requirements.md
diff --git a/doc/research/requirements/use_cases.md b/doc/research/paf22/requirements/use_cases.md
similarity index 100%
rename from doc/research/requirements/use_cases.md
rename to doc/research/paf22/requirements/use_cases.md
diff --git a/doc/research/acting/autoware_acting.md b/doc/research/paf23/acting/autoware_acting.md
similarity index 100%
rename from doc/research/acting/autoware_acting.md
rename to doc/research/paf23/acting/autoware_acting.md
diff --git a/doc/research/acting/paf21_1_acting.md b/doc/research/paf23/acting/paf21_1_acting.md
similarity index 100%
rename from doc/research/acting/paf21_1_acting.md
rename to doc/research/paf23/acting/paf21_1_acting.md
diff --git a/doc/research/acting/paf21_2_and_pylot_acting.md b/doc/research/paf23/acting/paf21_2_and_pylot_acting.md
similarity index 100%
rename from doc/research/acting/paf21_2_and_pylot_acting.md
rename to doc/research/paf23/acting/paf21_2_and_pylot_acting.md
diff --git a/doc/research/Leaderboard-2/changes_leaderboard2.md b/doc/research/paf23/leaderboard/changes_leaderboard.md
similarity index 100%
rename from doc/research/Leaderboard-2/changes_leaderboard2.md
rename to doc/research/paf23/leaderboard/changes_leaderboard.md
diff --git a/doc/research/perception/LIDAR_data.md b/doc/research/paf23/perception/LIDAR_data.md
similarity index 100%
rename from doc/research/perception/LIDAR_data.md
rename to doc/research/paf23/perception/LIDAR_data.md
diff --git a/doc/research/perception/Research_PAF21-Perception.md b/doc/research/paf23/perception/Research_PAF21-Perception.md
similarity index 100%
rename from doc/research/perception/Research_PAF21-Perception.md
rename to doc/research/paf23/perception/Research_PAF21-Perception.md
diff --git a/doc/research/perception/autoware-perception.md b/doc/research/paf23/perception/autoware-perception.md
similarity index 100%
rename from doc/research/perception/autoware-perception.md
rename to doc/research/paf23/perception/autoware-perception.md
diff --git a/doc/research/perception/paf_21_1_perception.md b/doc/research/paf23/perception/paf_21_1_perception.md
similarity index 100%
rename from doc/research/perception/paf_21_1_perception.md
rename to doc/research/paf23/perception/paf_21_1_perception.md
diff --git a/doc/research/perception/pylot.md b/doc/research/paf23/perception/pylot.md
similarity index 100%
rename from doc/research/perception/pylot.md
rename to doc/research/paf23/perception/pylot.md
diff --git a/doc/research/planning/paf23/Local_planning_for_first_milestone.md b/doc/research/paf23/planning/Local_planning_for_first_milestone.md
similarity index 100%
rename from doc/research/planning/paf23/Local_planning_for_first_milestone.md
rename to doc/research/paf23/planning/Local_planning_for_first_milestone.md
diff --git a/doc/research/planning/paf23/PlannedArchitecture.md b/doc/research/paf23/planning/PlannedArchitecture.md
similarity index 100%
rename from doc/research/planning/paf23/PlannedArchitecture.md
rename to doc/research/paf23/planning/PlannedArchitecture.md
diff --git a/doc/research/planning/paf23/Planning.md b/doc/research/paf23/planning/Planning.md
similarity index 100%
rename from doc/research/planning/paf23/Planning.md
rename to doc/research/paf23/planning/Planning.md
diff --git a/doc/research/planning/paf23/PlanningPaf22.md b/doc/research/paf23/planning/PlanningPaf22.md
similarity index 100%
rename from doc/research/planning/paf23/PlanningPaf22.md
rename to doc/research/paf23/planning/PlanningPaf22.md
diff --git a/doc/research/planning/paf23/Research_Pylot_Planning.md b/doc/research/paf23/planning/Research_Pylot_Planning.md
similarity index 100%
rename from doc/research/planning/paf23/Research_Pylot_Planning.md
rename to doc/research/paf23/planning/Research_Pylot_Planning.md
diff --git a/doc/research/planning/paf23/Testing_frenet_trajectory_planner.md b/doc/research/paf23/planning/Testing_frenet_trajectory_planner.md
similarity index 100%
rename from doc/research/planning/paf23/Testing_frenet_trajectory_planner.md
rename to doc/research/paf23/planning/Testing_frenet_trajectory_planner.md
diff --git a/doc/research/planning/paf23/paf21-1.md b/doc/research/paf23/planning/paf21-1.md
similarity index 100%
rename from doc/research/planning/paf23/paf21-1.md
rename to doc/research/paf23/planning/paf21-1.md
diff --git a/doc/research/planning/paf23/test_traj.py b/doc/research/paf23/planning/test_traj.py
similarity index 100%
rename from doc/research/planning/paf23/test_traj.py
rename to doc/research/paf23/planning/test_traj.py
diff --git a/doc/research/perception/README.md b/doc/research/perception/README.md
deleted file mode 100644
index 8e6c5108..00000000
--- a/doc/research/perception/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Perception
-
-This folder contains all the results of research on perception:
-
-- **PAF22**
-  - [Basics](./basics.md)
-  - [First implementation plan](./first_implementation_plan.md)
-- **PAF23**
-  - [Pylot Perception](./pylot.md)
-  - [PAF_21_2 Perception](./Research_PAF21-Perception.md)
-  - [PAF_21_1_Perception](./paf_21_1_perception.md)
-- [Autoware Perception](./autoware-perception.md)
diff --git a/doc/research/planning/README.md b/doc/research/planning/README.md
deleted file mode 100644
index 7ab5f590..00000000
--- a/doc/research/planning/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Planning
-
-This folder contains all the results of research on planning from PAF 23 and 22.
-The research documents from the previous project were kept as they contain helpful information. The documents are separated in different folders:
-
-- **[PAF22](./paf22/)**
-- **[PAF23](./paf23/)**

From eacde1120fb90701b4ed304601f7d23d122780e6 Mon Sep 17 00:00:00 2001
From: JulianTrommer
Date: Thu, 10 Oct 2024 10:56:26 +0200
Subject: [PATCH 21/28] Refactord docs to wiki page template

---
 doc/acting/acting_testing.md | 14 --
 doc/acting/architecture_documentation.md | 12 --
 doc/acting/main_frame_publisher.md | 14 --
 doc/acting/steering_controllers.md | 14 --
 doc/acting/vehicle_controller.md | 14 --
 doc/acting/velocity_controller.md | 14 --
 doc/development/build_action.md | 26 +--
 doc/development/coding_style.md | 23 ---
 doc/development/discord_webhook.md | 4 +-
 doc/development/distributed_simulation.md | 14 +-
 doc/development/documentation_requirements.md | 107 +++++++-----
 doc/development/dvc.md | 16 --
 doc/development/git_workflow.md | 18 --
 doc/development/installing_cuda.md | 13 +-
 doc/development/installing_python_packages.md | 11 +-
 doc/development/linter_action.md | 18 +-
 doc/development/linting.md | 7 +-
 doc/development/project_management.md | 21 ---
 doc/development/review_guideline.md | 20 ---
 .../templates/template_component_readme.md | 27 ---
 .../templates/template_wiki_page.md | 162 +-----------------
 .../templates/template_wiki_page_empty.md | 39 -----
 .../gps_example/gps_signal_example.md | 15 --
 doc/general/architecture.md | 15 --
 doc/general/installation.md | 12 ++
 doc/perception/coordinate_transformation.md | 16 --
 doc/perception/dataset_generator.md | 16 --
 doc/perception/dataset_structure.md | 14 --
 doc/perception/distance_to_objects.md | 13 +-
 doc/perception/efficientps.md | 14 --
 doc/perception/kalman_filter.md | 20 ---
 doc/perception/lidar_distance_utility.md | 15 --
 .../position_heading_filter_debug_node.md | 20 ---
 .../position_heading_publisher_node.md | 15 --
 doc/perception/traffic_light_detection.md | 13 ++
 doc/perception/vision_node.md | 9 +-
 doc/planning/ACC.md | 12 +-
 doc/planning/Behavior_tree.md | 18 --
 doc/planning/Collision_Check.md | 13 +-
 doc/planning/Global_Planner.md | 20 ---
 doc/planning/Local_Planning.md | 19 +-
 doc/planning/Preplanning.md | 22 ---
 doc/planning/README.md | 18 --
 doc/planning/Unstuck_Behavior.md | 15 --
 doc/planning/motion_planning.md | 20 ---
 doc/planning/py_trees.md | 21 +--
 doc/research/paf22/acting/basics_acting.md | 28 +--
 .../paf22/acting/implementation_acting.md | 18 +-
 doc/research/paf22/perception/basics.md | 20 ++-
 .../perception/first_implementation_plan.md | 19 +-
 doc/research/paf22/planning/Implementation.md | 20 +--
 .../paf22/planning/Navigation_Data.md | 15 --
 doc/research/paf22/planning/OpenDrive.md | 16 --
 doc/research/paf22/planning/basics.md | 13 +-
 .../paf22/planning/decision_making.md | 18 --
 .../planning/reevaluation_desicion_making.md | 19 +-
 .../paf22/planning/state_machine_design.md | 15 --
 .../informations_from_leaderboard.md | 23 ---
 .../paf22/requirements/requirements.md | 18 --
 doc/research/paf22/requirements/use_cases.md | 20 ---
 doc/research/paf23/acting/autoware_acting.md | 10 ++
 doc/research/paf23/acting/paf21_1_acting.md | 10 ++
 .../paf23/acting/paf21_2_and_pylot_acting.md | 30 +++-
 .../paf23/leaderboard/changes_leaderboard.md | 15 +-
 doc/research/paf23/perception/LIDAR_data.md | 8 +-
 .../perception/Research_PAF21-Perception.md | 15 +-
 .../paf23/perception/autoware-perception.md | 7 +
 .../paf23/perception/paf_21_1_perception.md | 10 ++
 doc/research/paf23/perception/pylot.md | 16 +-
 .../Local_planning_for_first_milestone.md | 12 +-
 .../paf23/planning/PlannedArchitecture.md | 11 +-
 doc/research/paf23/planning/Planning.md | 9 +
 doc/research/paf23/planning/PlanningPaf22.md | 14 ++
 .../paf23/planning/Research_Pylot_Planning.md | 5 +-
 .../Testing_frenet_trajectory_planner.md | 14 +-
 doc/research/paf23/planning/paf21-1.md | 12 +-
 76 files changed, 350 insertions(+), 1103 deletions(-)
 delete mode 100644 doc/development/templates/template_component_readme.md
 delete mode 100644 doc/development/templates/template_wiki_page_empty.md

diff --git a/doc/acting/acting_testing.md b/doc/acting/acting_testing.md
index fafc00c3..a5a67791 100644
--- a/doc/acting/acting_testing.md
+++ b/doc/acting/acting_testing.md
@@ -2,24 +2,10 @@
 
 **Summary:** This page shows ways to test and tune acting components and to verify that they work as intended.
 
----
-
-## Author
-
-Alexander Hellmann
-
-## Date
-
-01.04.2024
-
 - [How to test/tune acting components independedly](#how-to-testtune-acting-components-independedly)
-  - [Author](#author)
-  - [Date](#date)
   - [Acting\_Debug\_Node](#acting_debug_node)
     - [Setup for Testing with the Debug-Node](#setup-for-testing-with-the-debug-node)
     - [Operating the Debug-Node](#operating-the-debug-node)
-
 
 ## Acting_Debug_Node
diff --git a/doc/acting/architecture_documentation.md b/doc/acting/architecture_documentation.md
index 6eefb3cd..5c1aeb9d 100644
--- a/doc/acting/architecture_documentation.md
+++ b/doc/acting/architecture_documentation.md
@@ -2,18 +2,7 @@
 
 **Summary**: This documentation shows the current Acting Architecture.
 
-## Authors
-
-Alexander Hellmann
-
-## Date
-
-01.04.2024
-
 - [Architecture](#architecture)
-  - [Authors](#authors)
-  - [Date](#date)
   - [Acting Architecture](#acting-architecture)
   - [Summary of Acting Components](#summary-of-acting-components)
     - [pure\_pursuit\_controller.py](#pure_pursuit_controllerpy)
@@ -22,7 +11,6 @@ Alexander Hellmann
     - [vehicle\_controller.py](#vehicle_controllerpy)
     - [helper\_functions.py](#helper_functionspy)
     - [MainFramePublisher.py](#mainframepublisherpy)
-
 
 ## Acting Architecture
diff --git a/doc/acting/main_frame_publisher.md b/doc/acting/main_frame_publisher.md
index 0a12bfc6..706e9c2e 100644
--- a/doc/acting/main_frame_publisher.md
+++ b/doc/acting/main_frame_publisher.md
@@ -2,24 +2,10 @@
 
 **Summary:** This page informs about the main frame publisher
 
----
-
-## Author
-
-Julian Graf
-
-## Date
-
-29.03.2023
-
 - [Main frame publisher](#main-frame-publisher)
-  - [Author](#author)
-  - [Date](#date)
   - [Overview: Main frame publisher](#overview-main-frame-publisher)
   - [How to use](#how-to-use)
   - [Known issues](#known-issues)
-
 
 ## Overview: Main frame publisher
diff --git a/doc/acting/steering_controllers.md b/doc/acting/steering_controllers.md
index a15a3c02..e7c0a474 100644
--- a/doc/acting/steering_controllers.md
+++ b/doc/acting/steering_controllers.md
@@ -2,24 +2,10 @@
 
 **Summary:** This page provides an overview of the current status of both steering controllers, the PurePursuit and the Stanley Controller.
 
----
-
-## Author
-
-Alexander Hellmann
-
-## Date
-
-01.04.2024
-
 - [Overview of the Steering Controllers](#overview-of-the-steering-controllers)
-  - [Author](#author)
-  - [Date](#date)
   - [General Introduction to Steering Controllers](#general-introduction-to-steering-controllers)
   - [PurePursuit Controller](#purepursuit-controller)
   - [Stanley Controller](#stanley-controller)
-
 
 ## General Introduction to Steering Controllers
diff --git a/doc/acting/vehicle_controller.md b/doc/acting/vehicle_controller.md
index b8a2d2a3..62dfefe7 100644
--- a/doc/acting/vehicle_controller.md
+++ b/doc/acting/vehicle_controller.md
@@ -2,25 +2,11 @@
 
 **Summary:** This page provides an overview of the current status of the Vehicle Controller Component.
 
----
-
-## Authors
-
-Robert Fischer, Alexander Hellmann
-
-## Date
-
-01.04.2024
-
 - [Overview of the Vehicle Controller Component](#overview-of-the-vehicle-controller-component)
-  - [Authors](#authors)
-  - [Date](#date)
   - [General Introduction to the Vehicle Controller Component](#general-introduction-to-the-vehicle-controller-component)
   - [Vehicle Controller Output](#vehicle-controller-output)
   - [Emergency Brake](#emergency-brake)
   - [Unstuck Routine](#unstuck-routine)
-
 
 ## General Introduction to the Vehicle Controller Component
diff --git a/doc/acting/velocity_controller.md b/doc/acting/velocity_controller.md
index 9d8bbe3d..b4715997 100644
--- a/doc/acting/velocity_controller.md
+++ b/doc/acting/velocity_controller.md
@@ -2,23 +2,9 @@
 
 **Summary:** This page provides an overview of the current status of the velocity_controller.
 
----
-
-## Author
-
-Alexander Hellmann
-
-## Date
-
-01.04.2024
-
 - [Overview of the Velocity Controller](#overview-of-the-velocity-controller)
-  - [Author](#author)
-  - [Date](#date)
   - [General Introduction to Velocity Controller](#general-introduction-to-velocity-controller)
   - [Current Implementation](#current-implementation)
-
 
 ## General Introduction to Velocity Controller
diff --git a/doc/development/build_action.md b/doc/development/build_action.md
index ec311b43..77f71011 100644
--- a/doc/development/build_action.md
+++ b/doc/development/build_action.md
@@ -7,23 +7,7 @@
 - create an executable image of our work
 - evaluate our Agent with the leaderboard
 
----
-
-## Authors
-
-Tim Dreier, Korbinian Stein
-
-## Date
-
-2.12.2022
-
-## Table of contents
-
-
 - [GitHub actions](#github-actions)
-  - [Authors](#authors)
-  - [Date](#date)
   - [Table of contents](#table-of-contents)
   - [General](#general)
   - [The Dockerfile (`build/docker/build/Dockerfile`)](#the-dockerfile-builddockerbuilddockerfile)
@@ -42,16 +26,14 @@ Tim Dreier, Korbinian Stein
     - [5. Comment result in pull request `actions/github-script@v6`](#5-comment-result-in-pull-request-actionsgithub-scriptv6)
   - [Simulation results](#simulation-results)
-
-
 
 ## General
 
 The workflow defined in [`.github/workflows/build.yml`](../../.github/workflows/build.yml) creates an executable image
 which can later be submitted to the [CARLA leaderboard](https://leaderboard.carla.org) and pushes it to
 [GitHub Packages](ghcr.io).
 
-The image can then be pulled with `docker pull ghcr.io/ll7/paf22:latest` to get the latest version
-or `docker pull ghcr.io/ll7/paf22:` to get a specific version.
+The image can then be pulled with `docker pull ghcr.io/una-auxme/paf:latest` to get the latest version
+or `docker pull ghcr.io/una-auxme/paf:` to get a specific version.
 
 If the action is triggered by a pull request, the created image is then used to execute a test run in the leaderboard,
 using the devtest routes.
 The results of this simulation are then added as a comment to the pull request.
@@ -109,10 +91,10 @@ Same step as in the [build job](#1-checkout-repository--actionscheckoutv3-)
 
 ### 2. Run agent with docker-compose
 
-Runs the agent with the [`build/docker-compose.test.yml`](../../build/docker-compose.test.yml) that only contains the
+Runs the agent with the [`build/docker-compose.cicd.yaml`](../../build/docker-compose.cicd.yaml) that only contains the
 bare minimum components for test execution:
 
-- Carla Simulator (running in headless mode)
+- Carla Simulator
 - roscore
 - Agent container, run through the Carla [`leaderboard_evaluator`](https://github.com/carla-simulator/leaderboard/blob/leaderboard-2.0/leaderboard/leaderboard_evaluator.py).
diff --git a/doc/development/coding_style.md b/doc/development/coding_style.md
index 390d6859..544c397f 100644
--- a/doc/development/coding_style.md
+++ b/doc/development/coding_style.md
@@ -4,28 +4,7 @@
 
 **Summary:** This page contains the coding rules we want to follow as a team to improve readability and reviewing of our code. This document is for reference only and should be consulted in case of uncertainty about following the style guidelines.
 Based on PEP 8 ()
 
----
-
-## Author
-
-Josef Kircher
-
-## Date
-
-04.11.2022
-
-## Prerequisite
-
-VSCode Extensions:
-
-- autoDostring - Python Docstring Generator by Nils Werner
-
----
-
 - [Coding style guidelines](#coding-style-guidelines)
-  - [Author](#author)
-  - [Date](#date)
-  - [Prerequisite](#prerequisite)
   - [Code lay-out](#code-lay-out)
     - [Indentation](#indentation)
     - [Tabs or Spaces?](#tabs-or-spaces)
@@ -65,8 +44,6 @@ VSCode Extensions:
     - [Footnotes](#footnotes)
     - [Copyright](#copyright)
   - [Source](#source)
-
----
 
 ## Code lay-out
diff --git a/doc/development/discord_webhook.md b/doc/development/discord_webhook.md
index b93de316..08de569c 100644
--- a/doc/development/discord_webhook.md
+++ b/doc/development/discord_webhook.md
@@ -1,6 +1,8 @@
 # Discord Webhook
 
-Author: Lennart Luttkus, 15.11.2023
+**Summary**: This page explains the webhook that posts updates of the repository to Discord.
+
+- [Discord Webhook](#discord-webhook)
 
 The discord bot has access to the `#gitupdates` text channel on our discord server.
 It is an Integration as a Webhook.
diff --git a/doc/development/distributed_simulation.md b/doc/development/distributed_simulation.md
index 2d7cbe58..0dcf038e 100644
--- a/doc/development/distributed_simulation.md
+++ b/doc/development/distributed_simulation.md
@@ -3,13 +3,13 @@
 If you have not enough compute resources, start the `carla-simulator-server` on a remote machine and execute the agent on your local machine.
 As far as we know, you need more than **10 GB of VRAM** to run the server and the agent on the same machine.
 
-## Author
-
-Julian Trommer and Lennart Luttkus
-
-## Date
-
-2024-06-28
+- [Distributed Simulation](#distributed-simulation)
+  - [Remote Machine Setup](#remote-machine-setup)
+  - [Local Machine Setup](#local-machine-setup)
+  - [Ensure similarity between normal docker-compose and distributed docker-compose files](#ensure-similarity-between-normal-docker-compose-and-distributed-docker-compose-files)
+  - [Set the `` of the carla simulator in docker-compose distributed files](#set-the-ip-address-of-the-carla-simulator-in-docker-compose-distributed-files)
+  - [Start the agent on your local machine](#start-the-agent-on-your-local-machine)
+  - [How do you know that you do not have enough compute resources?](#how-do-you-know-that-you-do-not-have-enough-compute-resources)
 
 ## Remote Machine Setup
diff --git a/doc/development/documentation_requirements.md b/doc/development/documentation_requirements.md
index a1f1dcb6..f241d244 100644
--- a/doc/development/documentation_requirements.md
+++ b/doc/development/documentation_requirements.md
@@ -1,45 +1,66 @@
 # Documentation Requirements
 
-## Author
-
-Lennart Luttkus
-
-## Date
-
-08.03.2024
-
----
-
-1. **Readability and Maintainability:**
-   - **Consistent Formatting:** Code should follow a consistent and readable formatting style. Tools like linters or formatters can help enforce a consistent code style.
-     - [linting](./linting.md)
-     - [coding_style](./coding_style.md)
-   - **Meaningful Names:** Variable and function names should be descriptive and convey the purpose of the code.
-   - **Comments:** Clear and concise comments should be used where necessary to explain complex logic or provide context.
-2. **Code Structure:**
-   - **Modularity:** Code should be organized into modular components or functions, promoting reusability and maintainability.
-   - **Appropriate Use of Functions/Methods:** Functions should have a clear purpose and adhere to the single responsibility principle.
-   - **Hierarchy and Nesting:** Avoid overly nested structures; use appropriate levels of indentation to enhance readability.
-3. **Efficiency and Performance:**
-   - **Optimized Algorithms:** Code should use efficient algorithms and data structures to achieve good performance.
-   - **Avoidance of Code Smells:** Detect and eliminate code smells such as duplicated code, unnecessary complexity, or anti-patterns.
-4. **Error Handling:**
-   - **Effective Error Messages:** Error messages should be clear and provide useful information for debugging.
-   - **Graceful Error Handling:** Code should handle errors gracefully, avoiding crashes and providing appropriate feedback.
-5. **Testing:**?
-   - **Comprehensive Test Coverage:** Code should be accompanied by a suite of tests that cover different scenarios, ensuring reliability and maintainability.
-   - **Test Readability:** Tests should be clear and easy to understand, serving as documentation for the codebase.
-6. **Security:**
-   - **Input Validation:** Code should validate and sanitize inputs.
-7. **Documentation:**
-   - **Code Comments:** In addition to in-code comments, consider external documentation for the overall project structure, APIs, and configurations.
-   - **README Files:** Include a well-written README file that provides an overview of the project, installation instructions, and usage examples.
-8. **Version Control:**
-   - **Commit Messages:** Use descriptive and meaningful commit messages to track changes effectively.
-     - [commit](./commit.md)
-   - **Branching Strategy:** Follow a consistent and well-defined branching strategy to manage code changes.
-9. **Scalability:**
-   - **Avoid Hardcoding:** Parameterize values that might change, making it easier to scale the application.
-   - **Optimized Resource Usage:** Ensure efficient utilization of resources to support scalability.
-10. **Consistency with Coding Standards:**
-    - **Adherence to Coding Guidelines:** Follow established coding standards and best practices for the programming language or framework used.
+- [Documentation Requirements](#documentation-requirements)
+  - [Readability and Maintainability](#readability-and-maintainability)
+  - [Code Structure](#code-structure)
+  - [Efficiency and Performance](#efficiency-and-performance)
+  - [Error Handling](#error-handling)
+  - [Testing](#testing)
+  - [Security](#security)
+  - [Documentation](#documentation)
+  - [Version Control](#version-control)
+  - [Scalability](#scalability)
+  - [Consistency with Coding Standards](#consistency-with-coding-standards)
+
+## Readability and Maintainability
+
+- **Consistent Formatting:** Code should follow a consistent and readable formatting style. Tools like linters or formatters can help enforce a consistent code style.
+  - [linting](./linting.md)
+  - [coding_style](./coding_style.md)
+- **Meaningful Names:** Variable and function names should be descriptive and convey the purpose of the code.
+- **Comments:** Clear and concise comments should be used where necessary to explain complex logic or provide context.
+
+## Code Structure
+
+- **Modularity:** Code should be organized into modular components or functions, promoting reusability and maintainability.
+- **Appropriate Use of Functions/Methods:** Functions should have a clear purpose and adhere to the single responsibility principle.
+- **Hierarchy and Nesting:** Avoid overly nested structures; use appropriate levels of indentation to enhance readability.
+
+## Efficiency and Performance
+
+- **Optimized Algorithms:** Code should use efficient algorithms and data structures to achieve good performance.
+- **Avoidance of Code Smells:** Detect and eliminate code smells such as duplicated code, unnecessary complexity, or anti-patterns.
+
+## Error Handling
+
+- **Effective Error Messages:** Error messages should be clear and provide useful information for debugging.
+- **Graceful Error Handling:** Code should handle errors gracefully, avoiding crashes and providing appropriate feedback.
+
+## Testing
+
+- **Comprehensive Test Coverage:** Code should be accompanied by a suite of tests that cover different scenarios, ensuring reliability and maintainability.
+- **Test Readability:** Tests should be clear and easy to understand, serving as documentation for the codebase.
+
+## Security
+
+- **Input Validation:** Code should validate and sanitize inputs.
+
+## Documentation
+
+- **Code Comments:** In addition to in-code comments, consider external documentation for the overall project structure, APIs, and configurations.
+- **README Files:** Include a well-written README file that provides an overview of the project, installation instructions, and usage examples.
+
+## Version Control
+
+- **Commit Messages:** Use descriptive and meaningful commit messages to track changes effectively.
+  - [commit](./commit.md)
+- **Branching Strategy:** Follow a consistent and well-defined branching strategy to manage code changes.
+
+## Scalability
+
+- **Avoid Hardcoding:** Parameterize values that might change, making it easier to scale the application.
+- **Optimized Resource Usage:** Ensure efficient utilization of resources to support scalability.
+
+## Consistency with Coding Standards
+
+- **Adherence to Coding Guidelines:** Follow established coding standards and best practices for the programming language or framework used.
diff --git a/doc/development/dvc.md b/doc/development/dvc.md
index 52491342..abaa2ac3 100644
--- a/doc/development/dvc.md
+++ b/doc/development/dvc.md
@@ -4,22 +4,7 @@
 
 **Summary:** This page describes what DVC is and how/where to use it.
 
----
-
-## Author
-
-Tim Dreier
-
-## Date
-
-8.12.2022
-
-## Table of contents
-
 - [Data Version Control (DVC)](#data-version-control-dvc)
-  - [Author](#author)
-  - [Date](#date)
-  - [Table of contents](#table-of-contents)
   - [General](#general)
   - [Installation](#installation)
   - [Storage](#storage)
@@ -37,7 +22,6 @@ Tim Dreier
     - [Commit an experiment](#commit-an-experiment)
     - [Dvclive](#dvclive)
       - [Example](#example)
-
 
 ## General
diff --git a/doc/development/git_workflow.md b/doc/development/git_workflow.md
index a90a7d1e..b72cb958 100644
--- a/doc/development/git_workflow.md
+++ b/doc/development/git_workflow.md
@@ -4,24 +4,7 @@
 
 **Summary:** This page gives an overview over different types of git workflows to choose from.
 
----
-
-## Author
-
-Josef Kircher
-
-## Date
-
-07.11.2022
-
-## Prerequisite
-
----
-
 - [Git Style](#git-style)
-  - [Author](#author)
-  - [Date](#date)
-  - [Prerequisite](#prerequisite)
   - [Git workflow](#git-workflow)
     - [Git Feature Branch](#git-feature-branch)
     - [Branch strategy](#branch-strategy)
@@ -35,7 +18,6 @@ Josef Kircher
     - [Commit messages](#commit-messages)
   - [Git commands cheat sheet](#git-commands-cheat-sheet)
   - [Sources](#sources)
-
 
 ## Git workflow
diff --git a/doc/development/installing_cuda.md b/doc/development/installing_cuda.md
index 566c3b1a..622c5fad 100644
--- a/doc/development/installing_cuda.md
+++ b/doc/development/installing_cuda.md
@@ -4,15 +4,10 @@
 
 **Summary:** This page gives a short overview how to install cuda-toolkit on your computer.
 
----
-
-## Author
-
-Marco Riedenauer
-
-## Date
-
-10.01.2023
+- [Install cuda-toolkit](#install-cuda-toolkit)
+  - [First install](#first-install)
+  - [Common Problems](#common-problems)
+    - [Wrong version of cuda-toolkit installed](#wrong-version-of-cuda-toolkit-installed)
 
 ## First install
diff --git a/doc/development/installing_python_packages.md b/doc/development/installing_python_packages.md
index 7cb876a6..fc908d9a 100644
--- a/doc/development/installing_python_packages.md
+++ b/doc/development/installing_python_packages.md
@@ -4,15 +4,8 @@
 
 **Summary:** This page gives a short overview how to add python packages to the project.
 
----
-
-## Author
-
-Tim Dreier
-
-## Date
-
-7.12.2022
+- [Install python packages](#install-python-packages)
+  - [Adding packages with pip](#adding-packages-with-pip)
 
 ## Adding packages with pip
diff --git a/doc/development/linter_action.md b/doc/development/linter_action.md
index c71154dc..3cc5da67 100644
--- a/doc/development/linter_action.md
+++ b/doc/development/linter_action.md
@@ -4,28 +4,12 @@
 
 **Summary:** This page explains the GitHub lint action we use to ensure code quality.
 
----
-
-## Author
-
-Tim Dreier
-
-## Date
-
-25.11.2022
-
-## Table of contents
-
 - [Github actions](#github-actions)
-  - [Author](#author)
-  - [Date](#date)
-  - [Table of contents](#table-of-contents)
   - [General](#general)
   - [Pull requests](#pull-requests)
   - [🚨 Common Problems](#-common-problems)
    - [1. Error in the markdown linter](#1-error-in-the-markdown-linter)
    - [2. Error in the python linter](#2-error-in-the-python-linter)
-
 
 ## General
@@ -41,7 +25,7 @@ on: pull_request
 
 The action uses the same linters described in the section [Linting](./linting.md).
 
-Event though the linters are already executed during commit,
+Even though the linters are already active during development,
 the execution on pull request ensures that nobody skips the linter during commit.
 
 ## Pull requests
diff --git a/doc/development/linting.md b/doc/development/linting.md
index 728a1500..16d887e8 100644
--- a/doc/development/linting.md
+++ b/doc/development/linting.md
@@ -2,7 +2,12 @@
 
 (Kept from previous group [paf22])
 
-To ensure unified standards in the project, the following linters are applied during commit.
+**Summary:** To ensure unified standards in the project, the following linters are applied during commit.
+
+- [Linting](#linting)
+  - [🐍 Python conventions](#-python-conventions)
+  - [💬 Markdown Linter](#-markdown-linter)
+  - [🚨 Common Problems](#-common-problems)
 
 ## 🐍 Python conventions
diff --git a/doc/development/project_management.md b/doc/development/project_management.md
index e9917e5f..6ac08813 100644
--- a/doc/development/project_management.md
+++ b/doc/development/project_management.md
@@ -5,25 +5,7 @@
 
 **Summary:** We use a [Github Project](https://github.com/users/ll7/projects/2) for project management.
 Any bugs or features requests are managed in Github.
 
----
-
-## Author
-
-- Tim Dreier
-- Josef Kircher
-
-## Date
-
-23.11.2022
-
-## Prerequisite
-
----
-
 - [Project management](#project-management)
-  - [Author](#author)
-  - [Date](#date)
-  - [Prerequisite](#prerequisite)
   - [Create bug or feature requests](#create-bug-or-feature-requests)
     - [🐞 Bug](#-bug)
     - [Example for "Bug"](#example-for-bug)
@@ -34,9 +16,6 @@ Any bugs or features requests are managed in Github.
- [Create a Pull Request](#create-a-pull-request) - [Merging a Pull Request](#merging-a-pull-request) - [Deadlines for pull requests and reviews](#deadlines-for-pull-requests-and-reviews) - - ---- ## Create bug or feature requests diff --git a/doc/development/review_guideline.md b/doc/development/review_guideline.md index dc0fc446..5dcd76ff 100644 --- a/doc/development/review_guideline.md +++ b/doc/development/review_guideline.md @@ -4,24 +4,7 @@ **Summary:** This page gives an overview over the steps that should be taken during a review and how to give a helpful and constructive review ---- - -## Author - -Josef Kircher - -## Date - -17.11.2022 - -## Prerequisite - ---- - - [Review Guidelines](#review-guidelines) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [How to review](#how-to-review) - [How to comment on a pull request](#how-to-comment-on-a-pull-request) - [Incorporating feedback](#incorporating-feedback) @@ -30,9 +13,6 @@ Josef Kircher - [Re-requesting a review](#re-requesting-a-review) - [Resolving conversations](#resolving-conversations) - [Sources](#sources) - - ---- ## How to review diff --git a/doc/development/templates/template_component_readme.md b/doc/development/templates/template_component_readme.md deleted file mode 100644 index 4bc2a93b..00000000 --- a/doc/development/templates/template_component_readme.md +++ /dev/null @@ -1,27 +0,0 @@ -# Module title (e.g Perception) - -## About - -Description of module - -## Components - -Listing of all components used in this module - -## ROS Data Interface - -### Published topics - -Topics this module publishes to - -### Subscribed topics - -Topics this module subscribed to - -## Build Node + Run Tests - -How to build this component (in Docker) and run the tests if any available - -## Source - -Inspired by PAF 21-1 diff --git a/doc/development/templates/template_wiki_page.md b/doc/development/templates/template_wiki_page.md index 8679286f..95e95bff 100644 --- 
a/doc/development/templates/template_wiki_page.md +++ b/doc/development/templates/template_wiki_page.md @@ -2,167 +2,23 @@ **Summary:** This page functions a template for who to build knowledge articles for everyone to understand. The basic structure should be kept for all articles. This template further contains a cheat sheet with the most useful markdown syntax. ---- - -## Author - -Josef Kircher - -## Date - -04.11.2022 - -## Prerequisite - -VSCode Extensions: - -- Markdown All in One by Yu Zhang (for TOC) - ---- - -How to generate a TOC in VSCode: - -VSCode: - -1. ``Ctrl+Shift+P`` -2. Command "Create Table of Contents" - - - [Title of wiki page](#title-of-wiki-page) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - - [Cheat Sheet](#cheat-sheet) - - [Basics](#basics) - - [Extended](#extended) - - [My Great Heading {#custom-id}](#my-great-heading-custom-id) + - [Generate Table of Contents](#generate-table-of-contents) + - [Some Content](#some-content) - [more Content](#more-content) - - [Sources](#sources) - - -## Cheat Sheet - -### Basics - ---- - -Headings: - -(# H1) - -(## H2) - -(### H3) - ---- -Bold **bold text** - ---- -Italic *italicized text* - ---- -Blockquote - -> blockquote ---- -Ordered List - -1. First item -2. Second item -3. Third item - ---- -Unordered List - -- First item -- Second item -- Third item + - [Sources](#sources) ---- -Code +## Generate Table of Contents -`code` +How to generate a TOC in VS Code: ---- - -Horizontal Rule - ---- - -Link -[title](https://www.example.com) - ---- -Image -![alt text](image.jpg) - -### Extended - ---- -Table -| Syntax | Description | -| ----------- | ----------- | -| Header | Title | -| Paragraph | Text | - ---- -Fenced Code Block - -```python -{ - "firstName": "John", - "lastName": "Smith", - "age": 25 -} -``` - ---- -Footnote - -Here's a sentence with a footnote. [^1] - -[^1]: This is the footnote. 
- ---- -Heading ID - -#### My Great Heading {#custom-id} - ---- -Definition List -term -: definition - ---- -Strikethrough - -~~The world is flat.~~ - ---- - -Task List - -- [x] Write the press release -- [ ] Update the website - -- [ ] Contact the media - ---- - -Subscript - -H~2~O - ---- - -Superscript - -X^2^ +1. ``Ctrl+Shift+P`` +2. Command "Create Table of Contents" ---- +## Some Content ## more Content -### Sources +## Sources diff --git a/doc/development/templates/template_wiki_page_empty.md b/doc/development/templates/template_wiki_page_empty.md deleted file mode 100644 index 2992fd64..00000000 --- a/doc/development/templates/template_wiki_page_empty.md +++ /dev/null @@ -1,39 +0,0 @@ -# Title of wiki page - -**Summary:** This page functions a template for who to build knowledge articles for everyone to understand. The basic structure should be kept for all articles. This template further contains a cheat sheet with the most useful markdown syntax. - ---- - -## Author - -Josef Kircher - -## Date - -04.11.2022 - -## Prerequisite - -VSCode Extensions: - -- Markdown All in One by Yu Zhang (for TOC) - ---- - - -- [Title of wiki page](#title-of-wiki-page) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - - [Some Content](#some-content) - - [more Content](#more-content) - - [Sources](#sources) - - -## Some Content - -## more Content - -### Sources - - diff --git a/doc/examples/gps_example/gps_signal_example.md b/doc/examples/gps_example/gps_signal_example.md index 92069b4f..2fb92639 100644 --- a/doc/examples/gps_example/gps_signal_example.md +++ b/doc/examples/gps_example/gps_signal_example.md @@ -4,27 +4,12 @@ **The Filter that's currently in use: [Kalman Filter](../../perception/kalman_filter.md)** ---- - -## Authors - -Gabriel Schwald - -### Date - -07.01.2023 - ---- - - [GPS sensor](#gps-sensor) - - [Authors](#authors) - - [Date](#date) - [Raw sensor data](#raw-sensor-data) - [Filters for the sensor data](#filters-for-the-sensor-data) 
- [Intuitive filter](#intuitive-filter) - [Rolling average](#rolling-average) - [Kalman Filter](#kalman-filter) - ## Raw sensor data diff --git a/doc/general/architecture.md b/doc/general/architecture.md index 7451a4cf..72eacdce 100644 --- a/doc/general/architecture.md +++ b/doc/general/architecture.md @@ -3,21 +3,7 @@ **Summary:** This page gives an overview over the planned general architecture of the vehicle agent. The document contains an overview over all [nodes](#overview) and [topics](#topics). ---- - -## Authors - -Julius Miller, Alexander Hellmann, Samuel Kühnel - -## Date - -29.03.2024 - ---- - - [Planned architecture of vehicle agent](#planned-architecture-of-vehicle-agent) - - [Authors](#authors) - - [Date](#date) - [Overview](#overview) - [Perception](#perception) - [Obstacle Detection and Classification](#obstacle-detection-and-classification) @@ -35,7 +21,6 @@ Julius Miller, Alexander Hellmann, Samuel Kühnel - [Velocity control](#velocity-control) - [Vehicle controller](#vehicle-controller) - [Visualization](#visualization) - ## Overview diff --git a/doc/general/installation.md b/doc/general/installation.md index e8ac3b26..dd53ce5b 100644 --- a/doc/general/installation.md +++ b/doc/general/installation.md @@ -1,5 +1,17 @@ # 🛠️ Installation +**Summary:** This page explains the installation process for the project. 
+ +- [🛠️ Installation](#️-installation) + - [Installation](#installation) + - [Docker with NVIDIA GPU support](#docker-with-nvidia-gpu-support) + - [Docker](#docker) + - [Allow non-root user to execute Docker commands](#allow-non-root-user-to-execute-docker-commands) + - [NVIDIA Container toolkit](#nvidia-container-toolkit) + - [VS Code Extensions](#vs-code-extensions) + - [🚨 Common Problems](#-common-problems) + - [Vulkan device not available](#vulkan-device-not-available) + To run the project you have to install [docker](https://docs.docker.com/engine/install/) with NVIDIA GPU support, [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). For development, we recommend Visual Studio Code with the plugins that are recommended inside the `.vscode` folder. diff --git a/doc/perception/coordinate_transformation.md b/doc/perception/coordinate_transformation.md index a6180ec7..f4803887 100644 --- a/doc/perception/coordinate_transformation.md +++ b/doc/perception/coordinate_transformation.md @@ -2,27 +2,11 @@ **Summary:** Used for various helper functions such as quat_to_heading, that are useful in a lot of cases. **It is not yet fully documented**. ---- - -## Author - -Robert Fischer - -## Date - -12.01.2024 - - - - [Coordinate Transformation](#coordinate-transformation) - - [Author](#author) - - [Date](#date) - [Usage](#usage) - [Methods](#methods) - [quat\_to\_heading(quaternion)](#quat_to_headingquaternion) - - ## Usage Just importing the coordinate_transformation.py file is enough to use all of its funcions. diff --git a/doc/perception/dataset_generator.md b/doc/perception/dataset_generator.md index e1a71aff..d250b24e 100644 --- a/doc/perception/dataset_generator.md +++ b/doc/perception/dataset_generator.md @@ -3,29 +3,13 @@ **Summary:** The dataset generator located in perception/src/dataset_generator.py is a standalone script, directly hooking into the Carla Python API. 
It is used to generate a dataset to train perception models. ---- - -## Author - -Korbinian Stein - -## Date - -24.01.2023 - - - - [Dataset generator](#dataset-generator) - - [Author](#author) - - [Date](#date) - [Necessary adjustments](#necessary-adjustments) - [Dockerfile](#dockerfile) - [docker-compose.yml](#docker-composeyml) - [Usage](#usage) - [Using with leaderboard](#using-with-leaderboard) - - ## Necessary adjustments Important to note: The dataset generator uses diff --git a/doc/perception/dataset_structure.md b/doc/perception/dataset_structure.md index 1ba48dbc..4d28e87f 100644 --- a/doc/perception/dataset_structure.md +++ b/doc/perception/dataset_structure.md @@ -2,27 +2,13 @@ **Summary:** This document gives a short overview about the structure of our dataset that is needed to train EfficientPS. ---- - -## Author - -Marco Riedenauer - -## Date - -19.02.2023 - - - [Dataset structure](#dataset-structure) - - [Author](#author) - - [Date](#date) - [Converting the dataset](#converting-the-dataset) - [Preparation of the dataset for training](#preparation-of-the-dataset-for-training) - [Explanation of the conversion of groundtruth images](#explanation-of-the-conversion-of-groundtruth-images) - [Things](#things) - [Stuff](#stuff) - [Explanation of creating json files](#explanation-of-creating-json-files) - ## Converting the dataset diff --git a/doc/perception/distance_to_objects.md b/doc/perception/distance_to_objects.md index 684ac59f..07caf2cc 100644 --- a/doc/perception/distance_to_objects.md +++ b/doc/perception/distance_to_objects.md @@ -1,9 +1,14 @@ # Getting the Distance to Objects -Using the vision node and the lidar distance node we can calculate the distance of detected objects. -We can solve this problem from two directions mapping either pixel into the 3D-World or mapping 3D-LidarPoints into Pixel. - -This file will will explain the mapping of 3D-Points into 2D. 
+**Summary:** Using the vision node and the lidar distance node we can calculate the distance of detected objects. +We can solve this problem from two directions mapping either pixel into the 3D-World or mapping 3D-LidarPoints into Pixel. This file will explain the mapping of 3D-Points into 2D. + +- [Getting the Distance to Objects](#getting-the-distance-to-objects) + - [Converting 3D-Points into 2D-Camera-Space](#converting-3d-points-into-2d-camera-space) + - [Concept](#concept) + - [Purpose](#purpose) + - [Implementation](#implementation) + - [LIDAR-Configuration](#lidar-configuration) ## Converting 3D-Points into 2D-Camera-Space diff --git a/doc/perception/efficientps.md b/doc/perception/efficientps.md index 7664cd6e..9498b27c 100644 --- a/doc/perception/efficientps.md +++ b/doc/perception/efficientps.md @@ -4,26 +4,12 @@ **Summary:** This document gives a short overview about EfficientPS and its training process. ---- - -## Author - -Marco Riedenauer - -## Date - -28.03.2023 - - - [EfficientPS](#efficientps) - - [Author](#author) - - [Date](#date) - [Model Overview](#model-overview) - [Training](#training) - [Labels](#labels) - [Training parameters](#training-parameters) - [Train](#train) - ## Model Overview diff --git a/doc/perception/kalman_filter.md b/doc/perception/kalman_filter.md index df59b3f7..e7c676e1 100644 --- a/doc/perception/kalman_filter.md +++ b/doc/perception/kalman_filter.md @@ -8,24 +8,7 @@ As of now it is working with a 2D x-y-Transition model, which is why the current This implements the STANDARD Kalman Filter and NOT the Extended Kalman Filter or any other non-linear variant of the Kalman Filter. ---- - -## Author - -Robert Fischer - -## Date - -29.03.2024 - -## Prerequisite - ---- - - [Kalman Filter](#kalman-filter) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [Getting started](#getting-started) - [Description](#description) - [1. 
Predict](#1-predict) @@ -35,9 +18,6 @@ Robert Fischer - [Inputs](#inputs) - [Outputs](#outputs) - [Performance](#performance) - - ---- ## Getting started diff --git a/doc/perception/lidar_distance_utility.md b/doc/perception/lidar_distance_utility.md index fbcf2b7f..4a93449d 100644 --- a/doc/perception/lidar_distance_utility.md +++ b/doc/perception/lidar_distance_utility.md @@ -12,24 +12,9 @@ Additionally, it publishes a [Range](http://docs.ros.org/en/melodic/api/sensor_m containing the closest and the farest point. This can then be used to detect the distance to the closest object in front of us. ---- - -## Author - -Tim Dreier - -## Date - -16.03.2023 - ---- - - [Lidar Distance Utility](#lidar-distance-utility) - - [Author](#author) - - [Date](#date) - [Configuration](#configuration) - [Example](#example) - ## Configuration diff --git a/doc/perception/position_heading_filter_debug_node.md b/doc/perception/position_heading_filter_debug_node.md index 09b10551..14e56443 100644 --- a/doc/perception/position_heading_filter_debug_node.md +++ b/doc/perception/position_heading_filter_debug_node.md @@ -11,32 +11,12 @@ Using the Carla API could disqualify us from the leaderboard when submitting ont Uncomment (maybe even remove) this file when submitting to the official leaderboard. This file is only for debugging! 
---- - -## Author - -Robert Fischer - -## Date - -31.03.2024 - -## Prerequisite - ---- - - [position\_heading\_filter\_debug\_node.py](#position_heading_filter_debug_nodepy) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [Getting started](#getting-started) - [Description](#description) - [Inputs](#inputs) - [Outputs](#outputs) - [Visualization](#visualization) - - ---- ## Getting started diff --git a/doc/perception/position_heading_publisher_node.md b/doc/perception/position_heading_publisher_node.md index 5ff5daff..d833eef4 100644 --- a/doc/perception/position_heading_publisher_node.md +++ b/doc/perception/position_heading_publisher_node.md @@ -2,22 +2,7 @@ **Summary:** This node publishes the `current_pos` (Location of the car) and `current_heading` (Orientation of the car around the Z- axis) for every Node that needs to work with that. It also publishes all unfiltered Position and Heading signals for the Filter nodes to work with (such as Kalman). ---- - -## Author - -Robert Fischer - -## Date - -14.01.2024 - -## Prerequisite - - [position\_heading\_publisher\_node](#position_heading_publisher_node) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [Usage](#usage) - [Modular Extension / Template](#modular-extension--template) - [Heading Functions](#heading-functions) diff --git a/doc/perception/traffic_light_detection.md b/doc/perception/traffic_light_detection.md index 74ed555c..81340251 100644 --- a/doc/perception/traffic_light_detection.md +++ b/doc/perception/traffic_light_detection.md @@ -1,5 +1,18 @@ # Traffic Light Detection +**Summary:** This page explains how traffic lights are detected and interpreted. + +- [Traffic Light Detection](#traffic-light-detection) + - [Vision Node](#vision-node) + - [TrafficLightNode](#trafficlightnode) + - [Attributes](#attributes) + - [Methods](#methods) + - [Functions](#functions) + - [Usage](#usage) + - [Filtering of images](#filtering-of-images) + - [1. 
Vision Node](#1-vision-node) + - [2. Traffic Light Node](#2-traffic-light-node) + ## Vision Node For each analyzed image, it is checked whether an object with the ID=9 (traffic light) is detected. diff --git a/doc/perception/vision_node.md b/doc/perception/vision_node.md index 3838c998..c73d7688 100644 --- a/doc/perception/vision_node.md +++ b/doc/perception/vision_node.md @@ -1,8 +1,15 @@ # Vision Node -The Visison Node provides an adaptive interface that is able to perform object-detection and/or image-segmentation on multiple cameras at the same time. +**Summary:** The Vision Node provides an adaptive interface that is able to perform object-detection and/or image-segmentation on multiple cameras at the same time. It can also subscribe to the lidar_distance publisher and calculate distances of objects inside the detected bounding boxes. +- [Vision Node](#vision-node) + - [Model overview](#model-overview) + - [How it works](#how-it-works) + - [1. Object-Detection](#1-object-detection) + - [2. Distance-Calculation](#2-distance-calculation) + - [3. Publishing of Outputs](#3-publishing-of-outputs) + ## Model overview The Vision-Node implements an interface for a lot of different models which can be specified in the perception launch file. diff --git a/doc/planning/ACC.md b/doc/planning/ACC.md index 012e45c6..95621f09 100644 --- a/doc/planning/ACC.md +++ b/doc/planning/ACC.md @@ -1,12 +1,12 @@ # ACC (Adaptive Cruise Control) -## About +**Summary:** The ACC module is a ROS node responsible for adaptive speed control in an autonomous vehicle. It receives information about possible collisions, the current speed, the trajectory, and the speed limits. Based on this information, it calculates the desired speed and publishes it. -The ACC module is a ROS node responsible for adaptive speed control in an autonomous vehicle. It receives information about possible collisions, the current speed, the trajectory, and the speed limits. 
Based on this information, it calculates the desired speed and publishes it. - -## Components - -This module doesn't contain more components. +- [ACC (Adaptive Cruise Control)](#acc-adaptive-cruise-control) + - [ROS Data Interface](#ros-data-interface) + - [Published Topics](#published-topics) + - [Subscribed Topics](#subscribed-topics) + - [Node Creation + Running Tests](#node-creation--running-tests) ## ROS Data Interface diff --git a/doc/planning/Behavior_tree.md b/doc/planning/Behavior_tree.md index 11dc3498..aae559df 100644 --- a/doc/planning/Behavior_tree.md +++ b/doc/planning/Behavior_tree.md @@ -4,24 +4,7 @@ **Disclaimer**: As we mainly built our decision tree on the previous projects [psaf2](https://github.com/ll7/psaf2) and [paf22](https://github.com/ll7/paf22) , most part of the documentation was added here and adjusted to the changes we made. ---- - -## Author - -Julius Miller - -## Date - -01.04.2024 - -## Prerequisite - ---- - - [Behavior Tree](#behavior-tree) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [About](#about) - [Our behaviour tree](#our-behaviour-tree) - [Behavior](#behavior) @@ -44,7 +27,6 @@ Julius Miller - [`initialise()`](#initialise) - [`update()`](#update) - [`terminate()`](#terminate) - ## About diff --git a/doc/planning/Collision_Check.md b/doc/planning/Collision_Check.md index 67382dc5..d3e88572 100644 --- a/doc/planning/Collision_Check.md +++ b/doc/planning/Collision_Check.md @@ -1,9 +1,14 @@ # Collision Check -## Overview - -This module is responsible for detecting collisions and reporting them. It subscribes to topics that provide information about the current speed of the vehicle and the distances to objects detected by a LIDAR sensor. 
-It publishes topics that provide information about emergency stops, the distance to collisions, the distance to oncoming traffic, and the approximated speed of the obstacle in front +**Summary:** This module is responsible for detecting collisions and reporting them. It subscribes to topics that provide information about the current speed of the vehicle and the distances to objects detected by a LIDAR sensor. +It publishes topics that provide information about emergency stops, the distance to collisions, the distance to oncoming traffic, and the approximated speed of the obstacle in front. + +- [Collision Check](#collision-check) + - [Component](#component) + - [ROS Data Interface](#ros-data-interface) + - [Published Topics](#published-topics) + - [Subscribed Topics](#subscribed-topics) + - [Node Creation + Running Tests](#node-creation--running-tests) ## Component diff --git a/doc/planning/Global_Planner.md b/doc/planning/Global_Planner.md index 10ef74d6..2d3e4ea9 100644 --- a/doc/planning/Global_Planner.md +++ b/doc/planning/Global_Planner.md @@ -7,34 +7,14 @@ After finishing that this node initiates the calculation of a trajectory based o from preplanning_trajectory.py. In the end the computed trajectory and prevailing speed limits are published to the other components of this project (acting, decision making,...). 
---- - -## Author - -Samuel Kühnel - -## Date - -29.03.2024 - -## Note - This component and so most of the documentation was taken from the previous project PAF22 (Authors: Simon Erlbacher, Niklas Vogel) ---- - - [Global Planner](#global-planner) - - [Author](#author) - - [Date](#date) - - [Note](#note) - [Getting started](#getting-started) - [Description](#description) - [Inputs](#inputs) - [Outputs](#outputs) - [Testing](#testing) - - ---- ## Getting started diff --git a/doc/planning/Local_Planning.md b/doc/planning/Local_Planning.md index 4624fd83..22930f5d 100644 --- a/doc/planning/Local_Planning.md +++ b/doc/planning/Local_Planning.md @@ -2,24 +2,7 @@ **Summary:** This page contains the conceptual and theoretical explanations for the Local Planning component. For more technical documentation have a look at the other linked documentation files. ---- - -## Author - -Samuel Kühnel - -## Date - -29.03.2024 - -## Prerequisite - ---- - - [Local Planning](#local-planning) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [Overview](#overview) - [Collision Check](#collision-check) - [Apply filters](#apply-filters) @@ -31,7 +14,7 @@ Samuel Kühnel - [Selecting the target velocity](#selecting-the-target-velocity) - [Moving the trajectory](#moving-the-trajectory) - [Sources](#sources) - + ## Overview The Local Planning component is responsible for evaluating short term decisions in the local environment of the ego vehicle. Some examples can be collision avoidance, reducing speed or emergency brakes. 
diff --git a/doc/planning/Preplanning.md b/doc/planning/Preplanning.md index 252e2ce9..4475c290 100644 --- a/doc/planning/Preplanning.md +++ b/doc/planning/Preplanning.md @@ -2,26 +2,7 @@ **Summary:** Preplanner holds the logic to create a trajectory out of an OpenDrive Map with the belonging road options ---- - -## Author - -Authors: Simon Erlbacher, Niklas Vogel - -## Date - -29.03.2023 - -## Note - -The Preplanning component was taken from the previous project PAF22. - ---- - - [Preplanning](#preplanning) - - [Author](#author) - - [Date](#date) - - [Note](#note) - [Getting started](#getting-started) - [Road option concept](#road-option-concept) - [Road information](#road-information) @@ -29,9 +10,6 @@ The Preplanning component was taken from the previous project PAF22. - [Road interpolation](#road-interpolation) - [How to use the implementation](#how-to-use-the-implementation) - [Sources](#sources) - - ---- ## Getting started diff --git a/doc/planning/README.md b/doc/planning/README.md index 7123c208..378d94bd 100644 --- a/doc/planning/README.md +++ b/doc/planning/README.md @@ -1,23 +1,5 @@ # Planning Wiki ---- - -## Structure - -Planning wiki contains different parts: - - - -- [Planning Wiki](#planning-wiki) - - [Structure](#structure) - - [Overview](#overview) - - [Preplanning](#preplanning) - - [Global plan](#global-plan) - - [Decision making](#decision-making) - - [Local Planning](#local-planning) - ---- - ## Overview ### [Preplanning](./Preplanning.md) diff --git a/doc/planning/Unstuck_Behavior.md b/doc/planning/Unstuck_Behavior.md index 3990fa69..99c408d5 100644 --- a/doc/planning/Unstuck_Behavior.md +++ b/doc/planning/Unstuck_Behavior.md @@ -2,23 +2,8 @@ **Summary:** This file explains the unstuck behavior used as a fallback to recover from stuck situations. 
---- - -## Author - -Robert Fischer - -## Date - -01.04.2024 - ---- - - [Unstuck Behavior](#unstuck-behavior) - - [Author](#author) - - [Date](#date) - [Explanation](#explanation) - ## Explanation diff --git a/doc/planning/motion_planning.md b/doc/planning/motion_planning.md index 125e4ebe..dd05ed42 100644 --- a/doc/planning/motion_planning.md +++ b/doc/planning/motion_planning.md @@ -3,33 +3,13 @@ **Summary:** [motion_planning.py](.../code/planning/local_planner/src/motion_planning.py): The motion planning is responsible for collecting all the speeds from the different components and choosing the optimal one to be fowarded into the acting. It also is capabale to change the trajectory for a overtaking maneuver. ---- - -## Author - -Julius Miller - -## Date - -31.03.2023 - -## Prerequisite - ---- - - [Motion Planning](#motion-planning) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [Overview](#overview) - [Component](#component) - [ROS Data Interface](#ros-data-interface) - [Subscribed Topics](#subscribed-topics) - [Published Topics](#published-topics) - [Node Creation + Running Tests](#node-creation--running-tests) - - ---- ## Overview diff --git a/doc/planning/py_trees.md b/doc/planning/py_trees.md index bbc9fb91..4e12d604 100644 --- a/doc/planning/py_trees.md +++ b/doc/planning/py_trees.md @@ -2,32 +2,13 @@ **Summary:** pytrees is a python library used to generate and inspect decision trees. It has a very clear structure and is easy to understand, so it is used in this project. ---- - -## Author - -Josef Kircher - -## Date - -31.01.2023 - -## Note - -This documentation was taken from the previous project PAF22. 
- ---- - - [Pytrees](#pytrees) - - [Author](#author) - - [Date](#date) - - [Note](#note) - [Getting started](#getting-started) - [What is Pytrees?](#what-is-pytrees) - [Examples](#examples) - [Common commands](#common-commands) - [Sources](#sources) - + ## Getting started Pytrees is integrated in this project's dockerfile, so no setup is required. diff --git a/doc/research/paf22/acting/basics_acting.md b/doc/research/paf22/acting/basics_acting.md index 3b4dce32..d692baea 100644 --- a/doc/research/paf22/acting/basics_acting.md +++ b/doc/research/paf22/acting/basics_acting.md @@ -2,18 +2,22 @@ **Summary:** On this page you can find the results of the basic research on acting. ---- - -## Authors - -Gabriel Schwald, Julian Graf - -### Date - -14.11.2022 - ---- -[[TOC]] +- [Basic research acting](#basic-research-acting) + - [Objective](#objective) + - [Solutions from old PAF projects](#solutions-from-old-paf-projects) + - [Paf 20/1](#paf-201) + - [Paf 21/1](#paf-211) + - [Paf 20/2 and Paf 21/2](#paf-202-and-paf-212) + - [Lateral control](#lateral-control) + - [Pure Pursuit](#pure-pursuit) + - [Stanley](#stanley) + - [MPC (Model Predictive Control) / receding horizon control](#mpc-model-predictive-control--receding-horizon-control) + - [SMC (sliding mode control)](#smc-sliding-mode-control) + - [Velocity control](#velocity-control) + - [Interface](#interface) + - [Limits](#limits) + - [Visualization](#visualization) + - [Additional functionality (open for discussion)](#additional-functionality-open-for-discussion) ## Objective diff --git a/doc/research/paf22/acting/implementation_acting.md b/doc/research/paf22/acting/implementation_acting.md index fc763b36..83f89e51 100644 --- a/doc/research/paf22/acting/implementation_acting.md +++ b/doc/research/paf22/acting/implementation_acting.md @@ -2,30 +2,14 @@ **Summary:** On this page you can find the results of the basic research on acting summed up into resulting requirements and function, that were already agreed upon. 
---- - -## Authors - -Gabriel Schwald - -### Date - -20.11.2022 - ---- +This document sums up all functions already agreed upon in [#24](https://github.com/ll7/paf22/issues/24) regarding [acting](../acting/acting.md), that could be implemented in the next sprint. - - [Requirements and challenges for an acting implementation](#requirements-and-challenges-for-an-acting-implementation) - - [Authors](#authors) - - [Date](#date) - [Planned basic implementation of the Acting domain](#planned-basic-implementation-of-the-acting-domain) - [List of basic functions](#list-of-basic-functions) - [List of Inputs/Outputs](#list-of-inputsoutputs) - [Challenges](#challenges) - [Next steps](#next-steps) - - -This document sums up all functions already agreed upon in [#24](https://github.com/ll7/paf22/issues/24) regarding [acting](../acting/acting.md), that could be implemented in the next sprint. ## Planned basic implementation of the Acting domain diff --git a/doc/research/paf22/perception/basics.md b/doc/research/paf22/perception/basics.md index b5f921dc..d9f2bcc3 100644 --- a/doc/research/paf22/perception/basics.md +++ b/doc/research/paf22/perception/basics.md @@ -1,11 +1,29 @@ # Basic research perception -The perception is responsible for the efficient conversion of raw sensor and map data +**Summary:** The perception is responsible for the efficient conversion of raw sensor and map data into a useful environment representation that can be used by the planning for further processing. This includes the classification and localization of relevant entities in traffic and also the preparation of this data to enable a fast processing of this data in the planning layer. 
+- [Basic research perception](#basic-research-perception) + - [Interfaces](#interfaces) + - [Input](#input) + - [Output](#output) + - [Environment](#environment) + - [What objects have to be detected?](#what-objects-have-to-be-detected) + - [Special case traffic light (PAF21-1)](#special-case-traffic-light-paf21-1) + - [Algorithms for classification/localization](#algorithms-for-classificationlocalization) + - [Prediction](#prediction) + - [Map data](#map-data) + - [Limitations of the sensors and perception](#limitations-of-the-sensors-and-perception) + - [LIDAR](#lidar) + - [RADAR](#radar) + - [Camera](#camera) + - [Training data](#training-data) + - [Classification of situations](#classification-of-situations) + - [Combination of 2D camera data and 3D RADAR/LIDAR data](#combination-of-2d-camera-data-and-3d-radarlidar-data) + ## Interfaces ### Input diff --git a/doc/research/paf22/perception/first_implementation_plan.md b/doc/research/paf22/perception/first_implementation_plan.md index 4640b2cf..8bc88aba 100644 --- a/doc/research/paf22/perception/first_implementation_plan.md +++ b/doc/research/paf22/perception/first_implementation_plan.md @@ -1,23 +1,9 @@ # First Implementation Plan -This document shows the initial ideas for the implementation of the perception module. +**Summary:** This document shows the initial ideas for the implementation of the perception module. It includes the various detection and classification modules that are necessary for an efficient and reliable workflow. 
---- - -## Authors - -Marco Riedenauer - -## Date - -26.11.2022 - ---- - - [First Implementation Plan](#first-implementation-plan) - - [Authors](#authors) - - [Date](#date) - [Overview](#overview) - [Panoptic Segmentation](#panoptic-segmentation) - [Things and Stuff](#things-and-stuff) @@ -33,9 +19,6 @@ Marco Riedenauer - [Traffic Sign Detection](#traffic-sign-detection) - [Prediction](#prediction) - [Possible Issues/Milestones](#possible-issuesmilestones) - - ---- ## Overview diff --git a/doc/research/paf22/planning/Implementation.md b/doc/research/paf22/planning/Implementation.md index 4a5c9f7d..7af7cd55 100644 --- a/doc/research/paf22/planning/Implementation.md +++ b/doc/research/paf22/planning/Implementation.md @@ -1,24 +1,9 @@ # Planning Implementation -**Summary:** -The document gives a first impression of how the planning could/should work +**Summary:** The document gives a first impression of how the planning could/should work and how the topics are edited ---- - -## Authors - -Simon Erlbacher, Niklas Vogel - -## Date - -29.11.2022 - ---- - - [Planning Implementation](#planning-implementation) - - [Authors](#authors) - - [Date](#date) - [Overview](#overview) - [Preplanning](#preplanning) - [Decision Making](#decision-making) @@ -28,9 +13,6 @@ Simon Erlbacher, Niklas Vogel - [Measure distance](#measure-distance) - [Next steps](#next-steps) - [Sources](#sources) - - ---- ## Overview diff --git a/doc/research/paf22/planning/Navigation_Data.md b/doc/research/paf22/planning/Navigation_Data.md index 18611513..f16cc211 100644 --- a/doc/research/paf22/planning/Navigation_Data.md +++ b/doc/research/paf22/planning/Navigation_Data.md @@ -2,26 +2,11 @@ **Summary:** This page gives an overview and summary of how navigation data can be received, how it is structured and a visualisation of where the route instructions are placed on the ego vehicle route. 
---- - -## Author - -Niklas Vogel - -## Date - -14.12.2022 - ---- - - [Navigation Data Research](#navigation-data-research) - - [Author](#author) - - [Date](#date) - [How to receive navigation data](#how-to-receive-navigation-data) - [Structure of navigation data](#structure-of-navigation-data) - [Visualisation of received navigation data](#visualisation-of-received-navigation-data) - [Sources](#sources) - ## How to receive navigation data diff --git a/doc/research/paf22/planning/OpenDrive.md b/doc/research/paf22/planning/OpenDrive.md index 99999139..84895cd3 100644 --- a/doc/research/paf22/planning/OpenDrive.md +++ b/doc/research/paf22/planning/OpenDrive.md @@ -2,22 +2,7 @@ **Summary:** Evaluate the reading of the OpenDrive map in other projects and outline recommended further steps. ---- - -## Authors - -Simon Erlbacher - -### Date - -10.01.2023 - ---- - - - [OpenDrive Format](#opendrive-format) - - [Authors](#authors) - - [Date](#date) - [General](#general) - [Different Projects](#different-projects) - [PSAF1](#psaf1) @@ -30,7 +15,6 @@ Simon Erlbacher - [Implementation details](#implementation-details) - [Follow-up Issues](#follow-up-issues) - [Sources](#sources) - ## General diff --git a/doc/research/paf22/planning/basics.md b/doc/research/paf22/planning/basics.md index bd40b06e..093dcb23 100644 --- a/doc/research/paf22/planning/basics.md +++ b/doc/research/paf22/planning/basics.md @@ -1,18 +1,8 @@ # Grundrecherche im Planing -## Authors +**Summary:** This page contains the research of planning components of previous years. 
-Simon Erlbacher, Niklas Vogel - -## Datum - -15.11.2022 - ---- - - [Grundrecherche im Planing](#grundrecherche-im-planing) - - [Authors](#authors) - - [Datum](#datum) - [PAF 2021-1](#paf-2021-1) - [Vehicle Controller](#vehicle-controller) - [Decision-Making-Component](#decision-making-component) @@ -36,7 +26,6 @@ Simon Erlbacher, Niklas Vogel - [Wie sieht die Grenze zwischen global und local plan aus?](#wie-sieht-die-grenze-zwischen-global-und-local-plan-aus) - [Müssen Staus umfahren werden?](#müssen-staus-umfahren-werden) - [Sollgeschwindigkeitsplanung](#sollgeschwindigkeitsplanung) - ## [PAF 2021-1](https://github.com/ll7/paf21-1) diff --git a/doc/research/paf22/planning/decision_making.md b/doc/research/paf22/planning/decision_making.md index eea8b21f..70bc1cab 100644 --- a/doc/research/paf22/planning/decision_making.md +++ b/doc/research/paf22/planning/decision_making.md @@ -2,24 +2,7 @@ **Summary:** This page gives a brief summary over possible decision-making choices their ad- and disadvantages as well as the opportunity to interchange them later on. Also, possible implementation options for those concepts are given. 
---- - -## Author - -Josef Kircher - -## Date - -01.12.2022 - -## Prerequisite - ---- - - [Decision-making module](#decision-making-module) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [Decision-making algorithms](#decision-making-algorithms) - [Finite State machine](#finite-state-machine) - [Advantages](#advantages) @@ -54,7 +37,6 @@ Josef Kircher - [pytrees](#pytrees) - [Conclusion](#conclusion) - [Sources](#sources) - ## Decision-making algorithms diff --git a/doc/research/paf22/planning/reevaluation_desicion_making.md b/doc/research/paf22/planning/reevaluation_desicion_making.md index f6492d3c..2e4eaa75 100644 --- a/doc/research/paf22/planning/reevaluation_desicion_making.md +++ b/doc/research/paf22/planning/reevaluation_desicion_making.md @@ -2,24 +2,7 @@ **Summary:** This page gives a foundation for the re-evaluation of the decision-making ---- - -## Author - -Josef Kircher - -## Date - -26.01.2023 - -## Prerequisite - ---- - - [Re-evaluation of decision making component](#re-evaluation-of-decision-making-component) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [Reasons for re-evaluation](#reasons-for-re-evaluation) - [Options](#options) - [Pylot](#pylot) @@ -28,7 +11,7 @@ Josef Kircher - [Cons](#cons) - [Conclusion](#conclusion) - [Sources](#sources) - + ## Reasons for re-evaluation In the last sprint, I tried to get a graphic tool to work with the docker container withing the project. That failed, but I still think, that a graphical representation would be helpful. diff --git a/doc/research/paf22/planning/state_machine_design.md b/doc/research/paf22/planning/state_machine_design.md index 53bed6fb..2bf3ad2b 100644 --- a/doc/research/paf22/planning/state_machine_design.md +++ b/doc/research/paf22/planning/state_machine_design.md @@ -2,21 +2,7 @@ **Summary:** This page gives an overview of the design of the state machine and further describes states and transitions. 
---- - -## Author - -Josef Kircher - -## Date - -09.12.2022 - ---- - - [State machine design](#state-machine-design) - - [Author](#author) - - [Date](#date) - [Super state machine](#super-state-machine) - [Driving state machine](#driving-state-machine) - [KEEP](#keep) @@ -40,7 +26,6 @@ Josef Kircher - [STOP\_GO](#stop_go) - [Implementation](#implementation) - [Sources](#sources) - ## Super state machine diff --git a/doc/research/paf22/requirements/informations_from_leaderboard.md b/doc/research/paf22/requirements/informations_from_leaderboard.md index 25ce6b78..9b47caa2 100644 --- a/doc/research/paf22/requirements/informations_from_leaderboard.md +++ b/doc/research/paf22/requirements/informations_from_leaderboard.md @@ -2,27 +2,7 @@ **Summary:** This page contains the project informations from the CARLA leaderboard. More specific summary after page is finished. ---- - -## Author - -Josef Kircher - -## Date - -17.11.2022 - -## Prerequisite - -none - ---- - - - [Requirements of Carla Leaderboard](#requirements-of-carla-leaderboard) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [Task](#task) - [Participation modalities](#participation-modalities) - [Route format](#route-format) @@ -34,9 +14,6 @@ none - [Shutdown criteria](#shutdown-criteria) - [Submission](#submission) - [Sources](#sources) - - ---- ## Task diff --git a/doc/research/paf22/requirements/requirements.md b/doc/research/paf22/requirements/requirements.md index 8612e3c3..9cd7ca46 100644 --- a/doc/research/paf22/requirements/requirements.md +++ b/doc/research/paf22/requirements/requirements.md @@ -2,29 +2,11 @@ **Summary:** This page contains the requirements obtained from the Carla Leaderboard website as well as former projects in the `Praktikum Autonomes Fahren` ---- - -## Author - -Josef Kircher, Simon Erlbacher - -## Date - -17.11.2022 - -## Prerequisite - ---- - - [Requirements](#requirements) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - 
[Requirements from Leaderboard tasks](#requirements-from-leaderboard-tasks) - [Prioritized driving aspects](#prioritized-driving-aspects) - [more Content](#more-content) - [Sources](#sources) - ## Requirements from Leaderboard tasks diff --git a/doc/research/paf22/requirements/use_cases.md b/doc/research/paf22/requirements/use_cases.md index cf9a6570..984d23e9 100644 --- a/doc/research/paf22/requirements/use_cases.md +++ b/doc/research/paf22/requirements/use_cases.md @@ -2,24 +2,7 @@ **Summary:** This page contains a set of possible use cases containing a description of the scenario, the functions the agent has to have to pass that scenario as well as the event triggering that use case, the flow through that use case and the outcome. ---- - -## Author - -Josef Kircher - -## Date - -21.11.2022 - -## Prerequisite - ---- - - [Use cases in Carla Leaderboard](#use-cases-in-carla-leaderboard) - - [Author](#author) - - [Date](#date) - - [Prerequisite](#prerequisite) - [1. Control loss due to bad road condition](#1-control-loss-due-to-bad-road-condition) - [Description](#description) - [Pre-condition(Event)](#pre-conditionevent) @@ -173,9 +156,6 @@ Josef Kircher - [Outcome](#outcome-21) - [Associated use cases](#associated-use-cases-21) - [Sources](#sources) - - ---- ## 1. Control loss due to bad road condition diff --git a/doc/research/paf23/acting/autoware_acting.md b/doc/research/paf23/acting/autoware_acting.md index bb84218f..bf1900a6 100644 --- a/doc/research/paf23/acting/autoware_acting.md +++ b/doc/research/paf23/acting/autoware_acting.md @@ -1,5 +1,15 @@ # Research: [Autoware Acting](https://autowarefoundation.github.io/autoware-documentation/main/design/autoware-architecture/control/#autoware-control-design) +**Summary:** This page contains the research into the action component of Autoware. 
+ +- [Research: Autoware Acting](#research-autoware-acting) + - [Inputs](#inputs) + - [General Component Architecture](#general-component-architecture) + - [With the Control Module](#with-the-control-module) + - [Trajectory Follower](#trajectory-follower) + - [Vehicle Command Gate](#vehicle-command-gate) + - [Outputs](#outputs) + ## Inputs - Odometry (position and orientation, from Localization module) diff --git a/doc/research/paf23/acting/paf21_1_acting.md b/doc/research/paf23/acting/paf21_1_acting.md index 0b03937d..d6aae3d2 100644 --- a/doc/research/paf23/acting/paf21_1_acting.md +++ b/doc/research/paf23/acting/paf21_1_acting.md @@ -1,5 +1,15 @@ # Research: PAF21_1 Acting +**Summary:** This page contains the research into the action component of the PAF21_1 group. + +- [Research: PAF21\_1 Acting](#research-paf21_1-acting) + - [Inputs](#inputs) + - [Curve Detection](#curve-detection) + - [Speed Control](#speed-control) + - [Steering Control](#steering-control) + - [Straight Trajectories](#straight-trajectories) + - [Detected Curves](#detected-curves) + ## Inputs - waypoints of the planned route diff --git a/doc/research/paf23/acting/paf21_2_and_pylot_acting.md b/doc/research/paf23/acting/paf21_2_and_pylot_acting.md index 8dfa05d6..d6690290 100644 --- a/doc/research/paf23/acting/paf21_2_and_pylot_acting.md +++ b/doc/research/paf23/acting/paf21_2_and_pylot_acting.md @@ -1,6 +1,34 @@ # PAF Research: Robert Fischer -## PAF22 +**Summary:** This page contains the research into the action component of the PAF21_2 group and pylot. 
+ +- [PAF Research: Robert Fischer](#paf-research-robert-fischer) + - [Acting](#acting) + - [List of Inputs/Outputs](#list-of-inputsoutputs) + - [Challenges](#challenges) + - [PAF21\_2 Acting](#paf21_2-acting) + - [Standardroutine](#standardroutine) + - [Unstuck-Routine](#unstuck-routine) + - [Deadlock](#deadlock) + - [Verfolgung von Hindernissen](#verfolgung-von-hindernissen) + - [Messages](#messages) + - [StanleyController](#stanleycontroller) + - [PID Controller](#pid-controller) + - [Emergency Modus](#emergency-modus) + - [Bugabuses](#bugabuses) + - [Pylot Acting (Control)](#pylot-acting-control) + - [Control Types](#control-types) + - [PID](#pid) + - [MPC](#mpc) + - [Carla\_Autopilot](#carla_autopilot) + - [Basic Cotrol Code](#basic-cotrol-code) + - [**control\_eval\_operator.py**](#control_eval_operatorpy) + - [**messages.py**](#messagespy) + - [**pid.py**](#pidpy) + - [**pid\_control\_operator.py**](#pid_control_operatorpy) + - [**utils.py**](#utilspy) + - [MPC Control Code](#mpc-control-code) + ## Acting diff --git a/doc/research/paf23/leaderboard/changes_leaderboard.md b/doc/research/paf23/leaderboard/changes_leaderboard.md index 077cfb7b..aa7e6e91 100644 --- a/doc/research/paf23/leaderboard/changes_leaderboard.md +++ b/doc/research/paf23/leaderboard/changes_leaderboard.md @@ -2,15 +2,12 @@ **Summary:** New Features and changes made with the CARLA leaderboard-2.0 ---- - -## Author - -Samuel Kühnel - -## Date - -17.11.2023 +- [Overview leaderboard 2.0](#overview-leaderboard-20) + - [General Information](#general-information) + - [Submissions](#submissions) + - [New Features](#new-features) + - [Maps](#maps) + - [Scenarios and training database](#scenarios-and-training-database) ## General Information diff --git a/doc/research/paf23/perception/LIDAR_data.md b/doc/research/paf23/perception/LIDAR_data.md index ac62fa87..121b616b 100644 --- a/doc/research/paf23/perception/LIDAR_data.md +++ b/doc/research/paf23/perception/LIDAR_data.md @@ -1,6 +1,12 @@ # 
LIDAR-Data -This File discusses where the LIDAR-Data comes from, how its processed and how we could possibly use it. +**Summary:** This file discusses where the LIDAR data comes from, how it's processed and how we could possibly use it. + +- [LIDAR-Data](#lidar-data) + - [Origin](#origin) + - [Processing](#processing) + - [Distance Calculation](#distance-calculation) + - [Open questions](#open-questions) ## Origin diff --git a/doc/research/paf23/perception/Research_PAF21-Perception.md b/doc/research/paf23/perception/Research_PAF21-Perception.md index f036100c..0e7fa9dc 100644 --- a/doc/research/paf23/perception/Research_PAF21-Perception.md +++ b/doc/research/paf23/perception/Research_PAF21-Perception.md @@ -1,8 +1,17 @@ # Sprint 0: Research Samuel Kühnel -## PAF 21-2 +**Summary:** This page contains the research into the perception component of the PAF21_2 group. -### Perception +- [Sprint 0: Research Samuel Kühnel](#sprint-0-research-samuel-kühnel) + - [Perception](#perception) + - [Obstacle detection](#obstacle-detection) + - [TrafficLightDetection](#trafficlightdetection) + - [Problems and solutions](#problems-and-solutions) + - [Resume](#resume) + - [Perception](#perception-1) + - [Planning](#planning) + +## Perception ### Obstacle detection @@ -26,7 +35,7 @@ - Yellow painted traffic lights distort traffic light phase detection → **Solution**: Filter out red and green sections beforehand using masks and convert remaining image to grayscale and add masks again. - **Problem without solution**: European traffic lights can sometimes not be recognized at the stop line. 
-## Resumee +## Resume ### Perception diff --git a/doc/research/paf23/perception/autoware-perception.md b/doc/research/paf23/perception/autoware-perception.md index 42fb6256..afe99950 100644 --- a/doc/research/paf23/perception/autoware-perception.md +++ b/doc/research/paf23/perception/autoware-perception.md @@ -1,5 +1,12 @@ # Autoware Perception +**Summary:** This page contains the research into the perception component of Autoware. + +- [Autoware Perception](#autoware-perception) + - [1.Architecture](#1architecture) + - [2.Detection Mechanisms](#2detection-mechanisms) + - [3. Conclusion](#3-conclusion) + ## 1.Architecture ![image](https://github.com/una-auxme/paf/assets/102369315/6b3fb964-e650-442a-a674-8e0471d931a9) diff --git a/doc/research/paf23/perception/paf_21_1_perception.md b/doc/research/paf23/perception/paf_21_1_perception.md index 4538d028..fa87bb5f 100644 --- a/doc/research/paf23/perception/paf_21_1_perception.md +++ b/doc/research/paf23/perception/paf_21_1_perception.md @@ -1,5 +1,15 @@ # Paf_21_1 - Perception +**Summary:** This page contains the research into the perception component of the PAF21_1 group. + +- [Paf\_21\_1 - Perception](#paf_21_1---perception) + - [1. Architecture](#1-architecture) + - [**Key Features**](#key-features) + - [2. Sensors](#2-sensors) + - [3. Object-Detection](#3-object-detection) + - [4. TrafficLight-Detection](#4-trafficlight-detection) + - [5. Conclusion](#5-conclusion) + ## 1. Architecture ![image](https://github.com/una-auxme/paf/assets/102369315/07328c78-83d7-425c-802e-8cc49430e6c1) diff --git a/doc/research/paf23/perception/pylot.md b/doc/research/paf23/perception/pylot.md index 3b82e29e..69620e39 100644 --- a/doc/research/paf23/perception/pylot.md +++ b/doc/research/paf23/perception/pylot.md @@ -1,10 +1,16 @@ # Pylot - Perception -**Authors:** Maximilian Jannack - -**Date:** 12.11.2023 - ---- +**Summary:** This page contains the research into the perception component of pylot. 
+ +- [Pylot - Perception](#pylot---perception) + - [Detection](#detection) + - [Obstacle detection](#obstacle-detection) + - [Traffic light detection](#traffic-light-detection) + - [Lane detection](#lane-detection) + - [Obstacle Tracking](#obstacle-tracking) + - [Depth Estimation](#depth-estimation) + - [Segmentation](#segmentation) + - [Lidar](#lidar) ## [Detection](https://pylot.readthedocs.io/en/latest/perception.detection.html) diff --git a/doc/research/paf23/planning/Local_planning_for_first_milestone.md b/doc/research/paf23/planning/Local_planning_for_first_milestone.md index 48cbc1c2..a212dceb 100644 --- a/doc/research/paf23/planning/Local_planning_for_first_milestone.md +++ b/doc/research/paf23/planning/Local_planning_for_first_milestone.md @@ -2,15 +2,9 @@ **Summary:** This document states the implementation plan for the local planning. ---- - -## Author - -Julius Miller - -## Date - -03.12.2023 +- [Local Planning for first milestone](#local-planning-for-first-milestone) + - [Research](#research) + - [New Architecture for first milestone](#new-architecture-for-first-milestone) ## Research diff --git a/doc/research/paf23/planning/PlannedArchitecture.md b/doc/research/paf23/planning/PlannedArchitecture.md index 2bb98cac..e58578d2 100644 --- a/doc/research/paf23/planning/PlannedArchitecture.md +++ b/doc/research/paf23/planning/PlannedArchitecture.md @@ -1,6 +1,15 @@ # Planned Architecture -Provide an overview for a possible planning architecture consisting of Global Planner, Local Planner and Decision Making. +**Summary:** Provide an overview for a possible planning architecture consisting of Global Planner, Local Planner and Decision Making. 
+ +- [Planned Architecture](#planned-architecture) + - [Overview](#overview) + - [Components](#components) + - [Global Plan](#global-plan) + - [Decision Making](#decision-making) + - [Local Plan](#local-plan) + - [Interfaces](#interfaces) + - [Prioritisation](#prioritisation) ## Overview diff --git a/doc/research/paf23/planning/Planning.md b/doc/research/paf23/planning/Planning.md index 0229ced2..28ca5fb0 100644 --- a/doc/research/paf23/planning/Planning.md +++ b/doc/research/paf23/planning/Planning.md @@ -1,5 +1,14 @@ # Planning +**Summary:** This page contains research into the planning component of the PAF21_2 group. + +- [Planning](#planning) + - [What is Planning?](#what-is-planning) + - [PAF21 - 2](#paf21---2) + - [Autoware](#autoware) + - [Resumee](#resumee) + - [Notes](#notes) + ## What is Planning? Finding the optimal path from start to goal, taking into account the static and dynamic conditions and transfering a suitable trajectory to the acting system diff --git a/doc/research/paf23/planning/PlanningPaf22.md b/doc/research/paf23/planning/PlanningPaf22.md index 85902213..d856b671 100644 --- a/doc/research/paf23/planning/PlanningPaf22.md +++ b/doc/research/paf23/planning/PlanningPaf22.md @@ -1,7 +1,21 @@ # Planning in PAF 22 +**Summary:** This page contains research into the planning component of the PAF22 group. 
+ [(Github)](https://github.com/ll7/paf22) +- [Planning in PAF 22](#planning-in-paf-22) + - [Architecture](#architecture) + - [Preplanning](#preplanning) + - [Decision Making](#decision-making) + - [Local path planning](#local-path-planning) + - [Planning documentation](#planning-documentation) + - [Preplanning in code](#preplanning-in-code) + - [Global Plan in code](#global-plan-in-code) + - [Decision Making in code](#decision-making-in-code) + - [Conclusion](#conclusion) + - [What can be done next](#what-can-be-done-next) + ## Architecture ![overview](../../../assets/planning/overview.jpg) diff --git a/doc/research/paf23/planning/Research_Pylot_Planning.md b/doc/research/paf23/planning/Research_Pylot_Planning.md index e7277d52..30d046aa 100644 --- a/doc/research/paf23/planning/Research_Pylot_Planning.md +++ b/doc/research/paf23/planning/Research_Pylot_Planning.md @@ -1,6 +1,9 @@ # Sprint 0: Research Samuel Kühnel -## Pylot +**Summary:** This page contains the research into the planning component of pylot. 
+ +- [Sprint 0: Research Samuel Kühnel](#sprint-0-research-samuel-kühnel) + - [Planning](#planning) ## Planning diff --git a/doc/research/paf23/planning/Testing_frenet_trajectory_planner.md b/doc/research/paf23/planning/Testing_frenet_trajectory_planner.md index b2581027..ccc40903 100644 --- a/doc/research/paf23/planning/Testing_frenet_trajectory_planner.md +++ b/doc/research/paf23/planning/Testing_frenet_trajectory_planner.md @@ -2,15 +2,11 @@ **Summary:** This document summarizes the Frenet Optimal Trajectory planner used in the pylot project ---- - -## Author - -Samuel Kühnel - -## Date - -15.01.2024 +- [Frenet Optimal Trajectory](#frenet-optimal-trajectory) + - [Setup](#setup) + - [Example Usage](#example-usage) + - [Inputs](#inputs) + - [Decision](#decision) ## Setup diff --git a/doc/research/paf23/planning/paf21-1.md b/doc/research/paf23/planning/paf21-1.md index a358a641..7d9f0d4d 100644 --- a/doc/research/paf23/planning/paf21-1.md +++ b/doc/research/paf23/planning/paf21-1.md @@ -1,19 +1,15 @@ # Planning in PAF21-1 -**Authors:** Maximilian Jannack - -**Date:** 12.11.2023 - ---- - -In PAF21-1, they divided the planning stage into two major components: +**Summary:** In PAF21-1, they divided the planning stage into two major components: - Global Planner - Local Planner A more detailed explanation is already present in the [basics](../paf22/basics.md#paf-2021-1) chapter. 
---- +- [Planning in PAF21-1](#planning-in-paf21-1) + - [Global Planner](#global-planner) + - [Local Planner](#local-planner) ## Global Planner From 642bc6c86266f209d7c8c0b0461a75c55dda5909 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Thu, 10 Oct 2024 15:45:09 +0200 Subject: [PATCH 22/28] Updated execution & development notes --- README.md | 8 +++-- doc/development/README.md | 30 +++++++------------ doc/development/distributed_simulation.md | 9 ++++-- doc/development/documentation_requirements.md | 2 ++ doc/development/git_workflow.md | 11 ++----- doc/development/review_guideline.md | 7 ++++- doc/general/README.md | 5 ++-- build/README.md => doc/general/execution.md | 15 ++++++---- doc/general/installation.md | 11 +++---- 9 files changed, 50 insertions(+), 48 deletions(-) rename build/README.md => doc/general/execution.md (94%) diff --git a/README.md b/README.md index f217afd6..59863059 100644 --- a/README.md +++ b/README.md @@ -21,16 +21,20 @@ To be able to execute and develop the project, you need a Linux system equipped As the project is still in early development, these requirements are subject to change. -## Installation +## Getting started + +### Installation To run the project you have to install [docker](https://docs.docker.com/engine/install/) with NVIDIA GPU support, [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). `docker` and `nvidia-docker` are used to run the project in a containerized environment with GPU support. -More detailed instructions about setup and execution can be found [here](./doc/general/README.md). +More detailed instructions about the setup can be found [here](./doc/general/installation.md). ## Development +To get an overview of the current architecture of the agent you can look at the general documentation [here](./doc/general/architecture.md). The individual components are explained in the README files of their subfolders. 
+ If you contribute to this project please read the guidelines first. They can be found [here](./doc/development/README.md). ## Research diff --git a/doc/development/README.md b/doc/development/README.md index 3cee2abe..75a99b4c 100644 --- a/doc/development/README.md +++ b/doc/development/README.md @@ -2,23 +2,21 @@ If you contribute to this project please read the following guidelines first: -1. [Start the docker container to simulate the car](../../build/README.md) -2. [Documentation Requirements](./documentation_requirements.md) -3. [Commit](./commit.md) -4. [Linting](./linting.md) -5. [Coding style](./coding_style.md) -6. [Git Style](./git_workflow.md) -7. [Reviewing](./review_guideline.md) -8. [Project management](./project_management.md) -9. Github actions +1. [Documentation Requirements](./documentation_requirements.md) +2. [Linting](./linting.md) +3. [Coding style](./coding_style.md) +4. [Git Style](./git_workflow.md) +5. [Reviewing](./review_guideline.md) +6. [Project management](./project_management.md) +7. Github actions 1. [linting action](./linter_action.md) 2. [build action](./build_action.md) -10. [Install python packages](./installing_python_packages.md) -11. [Discord Webhook Documentation](./discord_webhook.md) +8. [Install python packages](./installing_python_packages.md) +9. [Discord Webhook Documentation](./discord_webhook.md) ## Templates -Some templates are provided in [`doc/development/templates`](./templates). +Templates for documentation and code are provided in [`doc/development/templates`](./templates). ### [`template_class.py`](./templates/template_class.py) @@ -28,16 +26,8 @@ Use this class if you don't have much experience with python. If you just want t If you just want to copy an empty class use this class. -### [`template_component_readme.md`](./templates/template_component_readme.md) - -This template functions a template for who to describe a component. IT should be contained in every component as `README.md`. 
- ### [`template_wiki_page.md`](./templates/template_wiki_page.md) -This template functions a template for who to build knowledge articles for everyone to understand. The basic structure should be kept for all articles. This template further contains a cheat sheet with the most useful markdown syntax. - -### [`template_wiki_page_empty.md`](./templates/template_wiki_page_empty.md) - This template functions a template for who to build knowledge articles for everyone to understand. The basic structure should be kept for all articles. This template is empty and can be used straight forward. ## Discord Webhook diff --git a/doc/development/distributed_simulation.md b/doc/development/distributed_simulation.md index 0dcf038e..f09d1ecb 100644 --- a/doc/development/distributed_simulation.md +++ b/doc/development/distributed_simulation.md @@ -1,9 +1,9 @@ # Distributed Simulation -If you have not enough compute resources, start the `carla-simulator-server` on a remote machine and execute the agent on your local machine. -As far as we know, you need more than **10 GB of VRAM** to run the server and the agent on the same machine. +**Summary:** This page documents the distributed execution of the Carla simulator and the agent. 
- [Distributed Simulation](#distributed-simulation) + - [General](#general) - [Remote Machine Setup](#remote-machine-setup) - [Local Machine Setup](#local-machine-setup) - [Ensure similarity between normal docker-compose and distributed docker-compose files](#ensure-similarity-between-normal-docker-compose-and-distributed-docker-compose-files) - [Start the carla server on your remote machine](#start-the-carla-server-on-your-remote-machine) - [Start the agent on your local machine](#start-the-agent-on-your-local-machine) - [How do you know that you do not have enough compute resources?](#how-do-you-know-that-you-do-not-have-enough-compute-resources) +## General + +If you do not have enough compute resources, start the `carla-simulator` on a remote machine and execute the agent on your local machine. +As far as we know, you need more than **10 GB of VRAM** to run the server and the agent on the same machine. ## Remote Machine Setup - Gain `ssh` access to the remote machine. diff --git a/doc/development/documentation_requirements.md b/doc/development/documentation_requirements.md index f241d244..163db979 100644 --- a/doc/development/documentation_requirements.md +++ b/doc/development/documentation_requirements.md @@ -1,5 +1,7 @@ # Documentation Requirements +**Summary:** This document provides the guidelines for the documentation. 
+ - [Documentation Requirements](#documentation-requirements) - [Readability and Maintainability](#readability-and-maintainability) - [Code Structure](#code-structure) diff --git a/doc/development/git_workflow.md b/doc/development/git_workflow.md index b72cb958..dcd9f0be 100644 --- a/doc/development/git_workflow.md +++ b/doc/development/git_workflow.md @@ -15,7 +15,6 @@ - [Branch Creation Settings](#branch-creation-settings) - [Creating a Branch in the Web Interface](#creating-a-branch-in-the-web-interface) - [Creating a Branch in VSCode](#creating-a-branch-in-vscode) - - [Commit messages](#commit-messages) - [Git commands cheat sheet](#git-commands-cheat-sheet) - [Sources](#sources) @@ -39,7 +38,7 @@ Two types of branches: ### Branch naming --- -Feature branch: issue number-description-of-issue (separator: '-') generated by Github automatically +Feature branch: issue number-description-of-issue (separator: '-') generated by Github automatically #### For example @@ -60,7 +59,7 @@ The `.vscode/settings.json` file in this repository contains settings that autom #### Creating a Branch in the Web Interface -To create a branch in the web interface, follow these steps: +To create a branch in the web interface, navigate to the corresponding issue and select the `Create Branch` option: ![Create Branch](../assets/github_create_a_branch.png) @@ -72,12 +71,6 @@ In Visual Studio Code, use the "GitHub.vscode-pull-request-github" extension. 2. These queries allow you to access different issues. 3. Click the button "->" to create a new branch from the selected issue, check out the branch, and assign the issue to yourself. 
-### Commit messages - ---- - -- proceed to [Commit Messages](./commit.md) - ### Git commands cheat sheet --- diff --git a/doc/development/review_guideline.md b/doc/development/review_guideline.md index 5dcd76ff..005a8650 100644 --- a/doc/development/review_guideline.md +++ b/doc/development/review_guideline.md @@ -7,6 +7,7 @@ - [Review Guidelines](#review-guidelines) - [How to review](#how-to-review) - [How to comment on a pull request](#how-to-comment-on-a-pull-request) + - [CodeRabbit](#coderabbit) - [Incorporating feedback](#incorporating-feedback) - [Responding to comments](#responding-to-comments) - [Applying suggested changes](#applying-suggested-changes) @@ -16,7 +17,7 @@ ## How to review -1. Select der PR you want to review on GitHub +1. Select the PR you want to review on GitHub ![img.png](../assets/PR_overview.png) 2. Go to Files Changed ![img.png](../assets/Files_Changed.png) @@ -51,6 +52,10 @@ - Be aware of negative bias with online communication. (If content is neutral, we assume the tone is negative.) Can you use positive language as opposed to neutral? - Use emoji to clarify tone. Compare “✨ ✨ Looks good 👍 ✨ ✨” to “Looks good.” +## CodeRabbit + +The repository also comes with CodeRabbit integration. This tool generates automatic reviews for a pull request. Although the proposed changes do not have to be incorporated, they can point to a better solution for parts of the implementation. + ## Incorporating feedback ### Responding to comments diff --git a/doc/general/README.md b/doc/general/README.md index 81105f81..cbf937f4 100644 --- a/doc/general/README.md +++ b/doc/general/README.md @@ -1,6 +1,7 @@ # General project setup -This Folder contains instruction how to execute the project and what it does. +This folder contains instructions on installation, execution and architecture of the agent. 1. [Installation](./installation.md) -2. [Current architecture of the agent](./architecture.md) +2. [Execution](./execution.md) +3. 
[Current architecture of the agent](./architecture.md) diff --git a/build/README.md b/doc/general/execution.md similarity index 94% rename from build/README.md rename to doc/general/execution.md index 1f3ddeca..8b8d498f 100644 --- a/build/README.md +++ b/doc/general/execution.md @@ -1,14 +1,15 @@ -# Build Directory Documentation +# Execution -This document provides an overview of the build structure of the project, -detailing the purpose and usage of the various configuration files located in the `build` directory. +This document provides an overview of how to execute the project, +detailing the purpose and usage of the various configuration files located in the [build](../../build/) directory. The project utilizes Docker and Docker Compose to manage services and dependencies, facilitating both normal and distributed execution modes. ## Table of Contents -- [Build Directory Documentation](#build-directory-documentation) +- [Execution](#execution) - [Table of Contents](#table-of-contents) + - [Quick Start](#quick-start) - [Directory Structure](#directory-structure) - [Base Service Files](#base-service-files) - [`agent_service.yaml`](#agent_serviceyaml) @@ -27,6 +28,10 @@ facilitating both normal and distributed execution modes. - [Notes](#notes) - [Conclusion](#conclusion) +## Quick Start + +In order to start the default leaderboard execution, simply navigate to the [build](../../build/) folder and select the `Compose up` option in the right-click menu of the `docker-compose.leaderboard.yaml` file. + ## Directory Structure The `build` directory contains the necessary configuration and setup files for building and running the project services. Below is an overview of the key files: @@ -44,7 +49,7 @@ ## Base Service Files -The base service files define the configurations for individual services used in the project. 
These files are included or extended in the Docker Compose files to create different execution setups. +The base service files define the configurations for individual services used in the project. These files are included or extended in the Docker Compose files to create different execution setups and are not intended for standalone execution. ### `agent_service.yaml` diff --git a/doc/general/installation.md b/doc/general/installation.md index dd53ce5b..f5bc39a3 100644 --- a/doc/general/installation.md +++ b/doc/general/installation.md @@ -1,9 +1,8 @@ # 🛠️ Installation -**Summary:** This page explains the installation process for the project. +**Summary:** This page explains the installation process for the project. - [🛠️ Installation](#️-installation) - - [Installation](#installation) - [Docker with NVIDIA GPU support](#docker-with-nvidia-gpu-support) - [Docker](#docker) - [Allow non-root user to execute Docker commands](#allow-non-root-user-to-execute-docker-commands) @@ -16,13 +15,11 @@ To run the project you have to install [docker](https://docs.docker.com/engine/i For development, we recommend Visual Studio Code with the plugins that are recommended inside the `.vscode` folder. -## Installation +## Docker with NVIDIA GPU support If not yet installed first install docker as described in section [Docker with NVIDIA GPU support](#docker-with-nvidia-gpu-support). -## Docker with NVIDIA GPU support - -For this installation, it's easiest to follow the guide in the [NVIDIA docs](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). +For NVIDIA GPU support, it's easiest to follow the guide in the [NVIDIA docs](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). For simplicity, we list the necessary steps here: @@ -51,7 +48,7 @@ After this, _restart_ your system to propagate the group changes. 
### NVIDIA Container toolkit -Setup the package repository and the GPG key: +Set up the package repository and the GPG key: ```shell distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \ From 5082d81fc01d8d87ae68bb4305e85f5d8b76a819 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Thu, 10 Oct 2024 15:48:57 +0200 Subject: [PATCH 23/28] Removed blank line from markdown --- doc/research/paf23/acting/paf21_2_and_pylot_acting.md | 1 - 1 file changed, 1 deletion(-) diff --git a/doc/research/paf23/acting/paf21_2_and_pylot_acting.md b/doc/research/paf23/acting/paf21_2_and_pylot_acting.md index d6690290..c071338b 100644 --- a/doc/research/paf23/acting/paf21_2_and_pylot_acting.md +++ b/doc/research/paf23/acting/paf21_2_and_pylot_acting.md @@ -29,7 +29,6 @@ - [**utils.py**](#utilspy) - [MPC Control Code](#mpc-control-code) - ## Acting - Longitudinal control From c5c3e9ff49f56bc1a01a51ab7ff50f08b997d0c0 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Mon, 14 Oct 2024 17:06:46 +0200 Subject: [PATCH 24/28] Added black action & refactored actions --- .github/workflows/add-to-project.yml | 2 +- .github/workflows/build.yml | 90 ++-------------------------- .github/workflows/drive.yaml | 63 +++++++++++++++++++ .github/workflows/format.yaml | 21 +++++++ .github/workflows/linter.yml | 18 +++--- 5 files changed, 100 insertions(+), 94 deletions(-) create mode 100644 .github/workflows/drive.yaml create mode 100644 .github/workflows/format.yaml diff --git a/.github/workflows/add-to-project.yml b/.github/workflows/add-to-project.yml index dcced1c4..52f9bcf6 100644 --- a/.github/workflows/add-to-project.yml +++ b/.github/workflows/add-to-project.yml @@ -1,4 +1,4 @@ -name: Add bugs to bugs project +name: Add issue to project on: issues: diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index fa07e68f..ac3b7276 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -1,9 +1,10 @@ -name: Build, publish and run tests +name: Build and push image 
on: - push: - branches: [ 'main' ] - pull_request: + workflow_run: + workflows: ["Check code format", "Linter markdown and code"] + types: + - completed env: REGISTRY: ghcr.io @@ -38,19 +39,6 @@ jobs: username: ${{ github.actor }} password: ${{ secrets.GITHUB_TOKEN }} - - name: Bump version and push tag - # only run on push to main - if: github.event_name == 'push' && github.ref == 'refs/heads/main' - id: tag - uses: mathieudutour/github-tag-action@v6.1 - with: - github_token: ${{ secrets.GITHUB_TOKEN }} - release_branches: main - - - name: Get commit hash - id: hash - run: echo "hash=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT - - name: Build and push Docker image id: build uses: docker/build-push-action@v3 @@ -59,72 +47,6 @@ jobs: file: ./build/docker/build/Dockerfile push: true # tag 'latest' and version on push to main, otherwise use the commit hash - tags: | - ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.tag.outputs.new_version == '' && steps.hash.outputs.hash || steps.tag.outputs.new_version }} - ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.event_name == 'push' && github.ref == 'refs/heads/main' && 'latest' || steps.hash.outputs.hash }} + tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest cache-from: type=gha cache-to: type=gha,mode=max - - - name: Output version - id: version - # take either the created tag or the commit hash - run: echo "version=${{ steps.tag.outputs.new_version == '' && steps.hash.outputs.hash || steps.tag.outputs.new_version }}" >> $GITHUB_OUTPUT - drive: - runs-on: self-hosted - needs: build-and-push-image - # run only on pull request for now - if: github.event_name == 'pull_request' - env: - AGENT_VERSION: ${{ needs.build-and-push-image.outputs.version }} - COMPOSE_FILE: ./build/docker-compose.cicd.yaml - steps: - - name: Checkout repository - uses: actions/checkout@v3 - - name: Print environment variables (DEBUG) - run: | - echo "AGENT_VERSION=${AGENT_VERSION}" - echo "COMPOSE_FILE=${COMPOSE_FILE}" - - name: Get 
commit hash - id: hash - run: echo "hash=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT - - name: Set AGENT_VERSION from hash (workaround) - run: echo "AGENT_VERSION=${{ steps.hash.outputs.hash }}" >> $GITHUB_ENV - - name: Run docker-compose - run: docker compose up --quiet-pull --exit-code-from agent - - name: Copy results - run: docker compose cp agent:/tmp/simulation_results.json . - - name: Stop docker-compose - # always run this step, to clean up even on error - if: always() - run: docker compose down -v - # add rendered JSON as comment to the pull request - - name: Add simulation results as comment - if: github.event_name == 'pull_request' - uses: actions/github-script@v6 - with: - github-token: ${{ secrets.GITHUB_TOKEN }} - # this script reads the simulation_results.json and creates a comment on the pull request with the results. - script: | - const fs = require('fs'); - // read the simulation results - const results = fs.readFileSync('./simulation_results.json', 'utf8'); - let resultsJson = JSON.parse(results); - // create a markdown table of the results - let resultsTable = resultsJson.values.map((values, i) => { - return `| ${resultsJson.labels[i]} | ${values} |`; - }); - // create a markdown table header - let resultsTableHeader = `| Metric | Value |`; - // create a markdown table divider - let resultsTableDivider = `| --- | --- |`; - // add everything to the resultsTable - resultsTable = resultsTableHeader + '\n' + resultsTableDivider + '\n' + resultsTable.join('\n'); - // add the results as a comment to the pull request - github.rest.issues.createComment({ - issue_number: context.issue.number, - owner: context.repo.owner, - repo: context.repo.repo, - body: "## Simulation results\n" + resultsTable - }); - - name: Prune all images older than 30 days from self-hosted runner - run: docker image prune -a --force --filter "until=720h" diff --git a/.github/workflows/drive.yaml b/.github/workflows/drive.yaml new file mode 100644 index 00000000..629acba7 --- 
/dev/null +++ b/.github/workflows/drive.yaml @@ -0,0 +1,62 @@ +name: Evaluate agent + +on: + workflow_run: + workflows: ["Build and push image"] + types: + - completed + +jobs: + drive: + runs-on: self-hosted + # run only after a successful upstream build for now + if: ${{ github.event.workflow_run.conclusion == 'success' }} + env: + AGENT_VERSION: latest + COMPOSE_FILE: ./build/docker-compose.cicd.yaml + steps: + - name: Checkout repository + uses: actions/checkout@v3 + - name: Print environment variables (DEBUG) + run: | + echo "AGENT_VERSION=${AGENT_VERSION}" + echo "COMPOSE_FILE=${COMPOSE_FILE}" + - name: Run docker-compose + run: docker compose up --quiet-pull --exit-code-from agent + - name: Copy results + run: docker compose cp agent:/tmp/simulation_results.json . + - name: Stop docker-compose + # always run this step, to clean up even on error + if: always() + run: docker compose down -v + # add rendered JSON as comment to the pull request + - name: Add simulation results as comment + if: github.event.workflow_run.event == 'pull_request' + uses: actions/github-script@v6 + with: + github-token: ${{ secrets.GITHUB_TOKEN }} + # this script reads the simulation_results.json and creates a comment on the pull request with the results. 
+ script: | + const fs = require('fs'); + // read the simulation results + const results = fs.readFileSync('./simulation_results.json', 'utf8'); + let resultsJson = JSON.parse(results); + // create a markdown table of the results + let resultsTable = resultsJson.values.map((values, i) => { + return `| ${resultsJson.labels[i]} | ${values} |`; + }); + // create a markdown table header + let resultsTableHeader = `| Metric | Value |`; + // create a markdown table divider + let resultsTableDivider = `| --- | --- |`; + // add everything to the resultsTable + resultsTable = resultsTableHeader + '\n' + resultsTableDivider + '\n' + resultsTable.join('\n'); + // add the results as a comment to the pull request + github.rest.issues.createComment({ + issue_number: context.issue.number, + owner: context.repo.owner, + repo: context.repo.repo, + body: "## Simulation results\n" + resultsTable + }); + - name: Prune all images older than 30 days from self-hosted runner + run: docker image prune -a --force --filter "until=720h" \ No newline at end of file diff --git a/.github/workflows/format.yaml b/.github/workflows/format.yaml new file mode 100644 index 00000000..58a1947e --- /dev/null +++ b/.github/workflows/format.yaml @@ -0,0 +1,21 @@ +name: Check code format + +on: + pull_request: + branches: + - "main" + +jobs: + format: + name: Check code files format + runs-on: ubuntu-latest + steps: + - name: Check out the repo + uses: actions/checkout@v2 + # Execute the python formatter + - name: Run the python formatter + uses: addnab/docker-run-action@v3 + with: + image: pyfound/black + options: -v ${{ github.workspace}}:/apps + run: black --check ./apps/ diff --git a/.github/workflows/linter.yml b/.github/workflows/linter.yml index d20a1b54..630ad725 100644 --- a/.github/workflows/linter.yml +++ b/.github/workflows/linter.yml @@ -1,6 +1,9 @@ -name: linter +name: Linter markdown and code -on: pull_request +on: + pull_request: + branches: + - "main" jobs: linter: @@ -13,16 +16,13 @@ jobs: 
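The github-script step above turns `simulation_results.json` into a markdown table before commenting it on the pull request. As a rough Python illustration of that table construction (assuming the same `{"labels": [...], "values": [...]}` JSON shape; the metric names in the example payload are made up, the real ones come from the leaderboard evaluator):

```python
import json


def results_table(raw: str) -> str:
    """Build a markdown table like the workflow's github-script step does."""
    data = json.loads(raw)
    # one row per (label, value) pair, mirroring the JS .map() over values
    rows = [
        f"| {label} | {value} |"
        for label, value in zip(data["labels"], data["values"])
    ]
    # header, divider, then the data rows
    return "\n".join(["| Metric | Value |", "| --- | --- |"] + rows)


# Hypothetical payload for demonstration only.
example = '{"labels": ["Route completion", "Collisions"], "values": [0.87, 2]}'
print(results_table(example))
```

The resulting string is what would be posted as the body of the PR comment, prefixed with a `## Simulation results` heading.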
- name: Run the markdown linter uses: addnab/docker-run-action@v3 with: - image: peterdavehello/markdownlint:0.32.2 - options: -v ${{ github.workspace }}:/md - run: | - markdownlint . + image: peterdavehello/markdownlint:0.32.2 + options: -v ${{ github.workspace }}:/md + run: markdownlint . # Execute the python linter (executes even if the previous step failed) - name: Run the python linter - if: always() uses: addnab/docker-run-action@v3 with: image: alpine/flake8 options: -v ${{ github.workspace }}:/apps - run: | - flake8 code + run: flake8 . From 60501bb685d596c1732080edcf19f29040c75fa5 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Mon, 14 Oct 2024 17:07:31 +0200 Subject: [PATCH 25/28] Fixed issues with file permissions & black linter --- .vscode/settings.json | 2 +- build/agent_service.yaml | 3 --- build/docker-compose.dev.yaml | 8 ++++---- build/docker-compose.devroute-distributed.yaml | 2 +- build/docker-compose.devroute.yaml | 2 +- build/docker-compose.leaderboard-distributed.yaml | 2 +- build/docker-compose.leaderboard.yaml | 2 +- build/docker-compose.linter.yaml | 6 ++++++ build/docker/agent/Dockerfile | 6 +++--- 9 files changed, 18 insertions(+), 15 deletions(-) diff --git a/.vscode/settings.json b/.vscode/settings.json index 8d42024f..60b7e5d1 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -23,7 +23,7 @@ "docker.commands.composeUp": [ { "label": "Compose Up", - "template": "xhost +local: && ${composeCommand} ${configurationFile} up" + "template": "xhost +local: && USERNAME=$(whoami) USER_UID=$(id -u) USER_GID=$(id -g) ${composeCommand} ${configurationFile} up" } ], "workbench.iconTheme": "vscode-icons" diff --git a/build/agent_service.yaml b/build/agent_service.yaml index 6362983b..8266fe9d 100644 --- a/build/agent_service.yaml +++ b/build/agent_service.yaml @@ -2,9 +2,6 @@ services: agent: build: dockerfile: build/docker/agent/Dockerfile - args: - - USER_UID=${DOCKER_HOST_UNIX_UID:-1000} - - USER_GID=${DOCKER_HOST_UNIX_GID:-1000} 
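The compose-up template above now exports the host user as `USERNAME`/`USER_UID`/`USER_GID` instead of relying on a hard-coded `carla` user with UID 999. As an illustrative, POSIX-only sketch, the shell substitutions `whoami`, `id -u` and `id -g` correspond to:

```python
import os
import pwd

# Equivalents of `id -u`, `id -g` and `whoami` -- the values that the
# dev compose file consumes as USERNAME, USER_UID and USER_GID.
uid = os.getuid()
gid = os.getgid()
compose_env = {
    "USERNAME": pwd.getpwuid(uid).pw_name,
    "USER_UID": str(uid),
    "USER_GID": str(gid),
}
print(compose_env)
```

Matching the container user to the host user this way means files written into the bind-mounted workspace stay owned by the developer rather than by root or a fixed UID.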
context: ../ init: true tty: true diff --git a/build/docker-compose.dev.yaml b/build/docker-compose.dev.yaml index 0e3fe39a..95d06f11 100644 --- a/build/docker-compose.dev.yaml +++ b/build/docker-compose.dev.yaml @@ -6,8 +6,9 @@ services: dockerfile: build/docker/agent-dev/Dockerfile context: ../ args: - - USER_UID=${DOCKER_HOST_UNIX_UID:-1000} - - USER_GID=${DOCKER_HOST_UNIX_GID:-1000} + USERNAME: ${USERNAME} + USER_UID: ${USER_UID} + USER_GID: ${USER_GID} init: true tty: true shm_size: 2gb @@ -24,5 +25,4 @@ services: network_mode: host privileged: true entrypoint: ["/dev_entrypoint.sh"] - command: bash - \ No newline at end of file + command: bash -c "sudo chown -R ${USER_UID}:${USER_GID} ../ && sudo chmod -R a+w ../ && bash" diff --git a/build/docker-compose.devroute-distributed.yaml b/build/docker-compose.devroute-distributed.yaml index e8ed7e9f..cc9f429c 100644 --- a/build/docker-compose.devroute-distributed.yaml +++ b/build/docker-compose.devroute-distributed.yaml @@ -8,7 +8,7 @@ services: extends: file: agent_service.yaml service: agent - command: bash -c "sleep 10 && roslaunch agent/launch/dev.launch" + command: bash -c "sleep 10 && sudo chown -R ${USER_UID}:${USER_GID} ../ && sudo chmod -R a+w ../ && roslaunch agent/launch/dev.launch" environment: - CARLA_SIM_HOST= - ROUTE=/workspace/code/routes/routes_simple.xml diff --git a/build/docker-compose.devroute.yaml b/build/docker-compose.devroute.yaml index 6f04d601..4510f0a2 100644 --- a/build/docker-compose.devroute.yaml +++ b/build/docker-compose.devroute.yaml @@ -12,4 +12,4 @@ services: service: agent environment: - ROUTE=/workspace/code/routes/routes_simple.xml - command: bash -c "sleep 10 && sudo chown -R carla:carla ../code/ && sudo chmod -R a+w ../code/ && python3 /opt/leaderboard/leaderboard/leaderboard_evaluator.py --debug=0 --routes=$${ROUTE} --agent=/workspace/code/agent/src/agent/agent.py --host=$${CARLA_SIM_HOST} --track=MAP" + command: bash -c "sleep 10 && sudo chown -R ${USER_UID}:${USER_GID} 
../ && sudo chmod -R a+w ../ && python3 /opt/leaderboard/leaderboard/leaderboard_evaluator.py --debug=0 --routes=$${ROUTE} --agent=/workspace/code/agent/src/agent/agent.py --host=$${CARLA_SIM_HOST} --track=MAP" diff --git a/build/docker-compose.leaderboard-distributed.yaml b/build/docker-compose.leaderboard-distributed.yaml index 464c1aa6..1bcb9949 100644 --- a/build/docker-compose.leaderboard-distributed.yaml +++ b/build/docker-compose.leaderboard-distributed.yaml @@ -7,7 +7,7 @@ services: extends: file: agent_service.yaml service: agent - command: bash -c "sleep 10 && sudo chown -R carla:carla ../code/ && sudo chmod -R a+w ../code/ && python3 /opt/leaderboard/leaderboard/leaderboard_evaluator.py --debug=0 --routes=$${ROUTE} --agent=/workspace/code/agent/src/agent/agent.py --host=$${CARLA_SIM_HOST} --track=MAP" + command: bash -c "sleep 10 && sudo chown -R ${USER_UID}:${USER_GID} ../ && sudo chmod -R a+w ../ && python3 /opt/leaderboard/leaderboard/leaderboard_evaluator.py --debug=0 --routes=$${ROUTE} --agent=/workspace/code/agent/src/agent/agent.py --host=$${CARLA_SIM_HOST} --track=MAP" environment: - CARLA_SIM_HOST= diff --git a/build/docker-compose.leaderboard.yaml b/build/docker-compose.leaderboard.yaml index 93a669fc..32fc98fc 100644 --- a/build/docker-compose.leaderboard.yaml +++ b/build/docker-compose.leaderboard.yaml @@ -8,4 +8,4 @@ services: extends: file: agent_service.yaml service: agent - command: bash -c "sleep 10 && sudo chown -R carla:carla ../code/ && sudo chmod -R a+w ../code/ && python3 /opt/leaderboard/leaderboard/leaderboard_evaluator.py --debug=0 --routes=$${ROUTE} --agent=/workspace/code/agent/src/agent/agent.py --host=$${CARLA_SIM_HOST} --track=MAP" + command: bash -c "sleep 10 && sudo chown -R ${USER_UID}:${USER_GID} ../ && sudo chmod -R a+w ../ && python3 /opt/leaderboard/leaderboard/leaderboard_evaluator.py --debug=0 --routes=$${ROUTE} --agent=/workspace/code/agent/src/agent/agent.py --host=$${CARLA_SIM_HOST} --track=MAP" diff --git 
a/build/docker-compose.linter.yaml b/build/docker-compose.linter.yaml index 0816aaab..d184141d 100644 --- a/build/docker-compose.linter.yaml +++ b/build/docker-compose.linter.yaml @@ -5,6 +5,12 @@ services: volumes: - ../:/apps + black: + image: pyfound/black + command: black --check ./apps/ + volumes: + - ../:/apps + mdlint: image: peterdavehello/markdownlint:0.32.2 command: markdownlint . diff --git a/build/docker/agent/Dockerfile b/build/docker/agent/Dockerfile index 1917b372..7f7d60d7 100644 --- a/build/docker/agent/Dockerfile +++ b/build/docker/agent/Dockerfile @@ -12,9 +12,9 @@ FROM osrf/ros:noetic-desktop-full-focal # COPY --from=carla /home/carla/PythonAPI /opt/carla/PythonAPI -ARG USERNAME=carla -ARG USER_UID=999 -ARG USER_GID=$USER_UID +ARG USERNAME +ARG USER_UID +ARG USER_GID ARG DEBIAN_FRONTEND=noninteractive # install rendering dependencies for rviz / rqt From e97dae5a61f1bc6fb9f59d12cafd7ff9300733b9 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Tue, 15 Oct 2024 08:25:40 +0200 Subject: [PATCH 26/28] Added black formatter to project --- .editorconfig | 2 +- .flake8 | 8 ++----- .vscode/extensions.json | 3 ++- .vscode/settings.json | 3 ++- build/docker/agent/Dockerfile | 42 +++++++++++++++++------------------ 5 files changed, 28 insertions(+), 30 deletions(-) diff --git a/.editorconfig b/.editorconfig index 9dd330ea..8db9e725 100644 --- a/.editorconfig +++ b/.editorconfig @@ -10,7 +10,7 @@ insert_final_newline = true trim_trailing_whitespace = true [*.py] -max_line_length = 80 +max_line_length = 88 indent_size = 4 [*.md] diff --git a/.flake8 b/.flake8 index 042f2345..6c032f36 100644 --- a/.flake8 +++ b/.flake8 @@ -1,7 +1,3 @@ [flake8] -exclude= code/planning/src/behavior_agent/behavior_tree.py, - code/planning/src/behavior_agent/behaviours/__init__.py, - code/planning/src/behavior_agent/behaviours, - code/planning/__init__.py, - doc/development/templates/template_class_no_comments.py, - doc/development/templates/template_class.py \ No newline at end 
of file +max-line-length = 88 +extend-ignore = E203,E701 \ No newline at end of file diff --git a/.vscode/extensions.json b/.vscode/extensions.json index 6e4fc554..c27bbad5 100644 --- a/.vscode/extensions.json +++ b/.vscode/extensions.json @@ -11,6 +11,7 @@ "bierner.markdown-mermaid", "richardkotze.git-mob", "ms-vscode-remote.remote-containers", - "valentjn.vscode-ltex" + "valentjn.vscode-ltex", + "ms-python.black-formatter" ] } \ No newline at end of file diff --git a/.vscode/settings.json b/.vscode/settings.json index 60b7e5d1..817fbe31 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -26,5 +26,6 @@ "template": "xhost +local: && USERNAME=$(whoami) USER_UID=$(id -u) USER_GID=$(id -g) ${composeCommand} ${configurationFile} up" } ], - "workbench.iconTheme": "vscode-icons" + "workbench.iconTheme": "vscode-icons", + "editor.formatOnSave": true } \ No newline at end of file diff --git a/build/docker/agent/Dockerfile b/build/docker/agent/Dockerfile index 7f7d60d7..c8c05ceb 100644 --- a/build/docker/agent/Dockerfile +++ b/build/docker/agent/Dockerfile @@ -19,7 +19,7 @@ ARG DEBIAN_FRONTEND=noninteractive # install rendering dependencies for rviz / rqt RUN apt-get update \ - && apt-get install -y -qq --no-install-recommends \ + && apt-get install -y -qq --no-install-recommends \ libxext6 libx11-6 libglvnd0 libgl1 \ libglx0 libegl1 freeglut3-dev apt-utils \ fprintd libfprint-2-2 libpam-fprintd @@ -30,10 +30,10 @@ RUN apt-get install wget unzip # Download Carla PythonAPI (alternative to getting it from the Carla-Image, which is commented out above) # If the PythonAPI/Carla version changes, either update the link, or refer to the comment at the top of this file. 
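The `.editorconfig` and `.flake8` changes above align both tools on black's default line length of 88 characters. What flake8's line-length check (E501) then enforces can be sketched as follows (simplified — real flake8 also honours `noqa` comments, encodings, and URLs in `noqa` rules):

```python
MAX_LINE_LENGTH = 88  # black's default, now mirrored in .editorconfig and .flake8


def overlong_lines(source: str, limit: int = MAX_LINE_LENGTH) -> list:
    """Return the 1-based numbers of lines longer than `limit`."""
    return [
        number
        for number, line in enumerate(source.splitlines(), start=1)
        if len(line) > limit
    ]


sample = "short = 1\nlong = '" + "a" * 100 + "'"
print(overlong_lines(sample))  # the second line is 109 characters wide
```

Since black itself never produces lines longer than 88 (except for things it cannot split, such as long string literals), running black before flake8 keeps the two tools from disagreeing.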
RUN wget https://github.com/una-auxme/paf/releases/download/v0.0.1/PythonAPI_Leaderboard-2.0.zip -O PythonAPI.zip \ - && unzip PythonAPI.zip \ - && rm PythonAPI.zip \ - && mkdir -p /opt/carla \ - && mv PythonAPI /opt/carla/PythonAPI + && unzip PythonAPI.zip \ + && rm PythonAPI.zip \ + && mkdir -p /opt/carla \ + && mv PythonAPI /opt/carla/PythonAPI # Workaround/fix for using dpkg for cuda installation # Only required for the lab PCs @@ -65,12 +65,12 @@ ENV PYTHONPATH=$PYTHONPATH:/opt/carla/PythonAPI/carla/dist/carla-0.9.14-py3.7-li # install mlocate, pip, wget, git and some ROS dependencies for building the CARLA ROS bridge RUN apt-get update && apt-get install -y \ - mlocate python3-pip wget git python-is-python3 \ - ros-noetic-ackermann-msgs ros-noetic-derived-object-msgs \ - ros-noetic-carla-msgs ros-noetic-pcl-conversions \ - ros-noetic-rviz ros-noetic-rqt ros-noetic-pcl-ros ros-noetic-rosbridge-suite ros-noetic-rosbridge-server \ - ros-noetic-robot-pose-ekf ros-noetic-ros-numpy \ - ros-noetic-py-trees-ros ros-noetic-rqt-py-trees ros-noetic-rqt-reconfigure + mlocate python3-pip wget git python-is-python3 \ + ros-noetic-ackermann-msgs ros-noetic-derived-object-msgs \ + ros-noetic-carla-msgs ros-noetic-pcl-conversions \ + ros-noetic-rviz ros-noetic-rqt ros-noetic-pcl-ros ros-noetic-rosbridge-suite ros-noetic-rosbridge-server \ + ros-noetic-robot-pose-ekf ros-noetic-ros-numpy \ + ros-noetic-py-trees-ros ros-noetic-rqt-py-trees ros-noetic-rqt-reconfigure SHELL ["/bin/bash", "-c"] @@ -105,9 +105,9 @@ ENV CARLA_ROS_BRIDGE_ROOT=/catkin_ws/src/ros-bridge # (as we're not running as root, pip installs into ~/.local/bin) ENV PATH=$PATH:/home/$USERNAME/.local/bin -# install simple_pid +# install pip packages RUN python -m pip install pip --upgrade \ - && python -m pip install simple_pid pygame transformations roslibpy lxml + && python -m pip install simple_pid pygame transformations roslibpy lxml black # install the scenario runner from GitHub leaderboard-2.0 branch ENV 
CARLA_ROOT=/opt/carla @@ -179,11 +179,11 @@ RUN echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc ENTRYPOINT ["/entrypoint.sh"] CMD ["bash", "-c", "sleep 10 && \ -python3 /opt/leaderboard/leaderboard/leaderboard_evaluator.py \ ---debug=${DEBUG_CHALLENGE} \ ---repetitions=${REPETITIONS} \ ---checkpoint=${CHECKPOINT_ENDPOINT} \ ---track=${CHALLENGE_TRACK} \ ---agent=${TEAM_AGENT} \ ---routes=${ROUTES} \ ---host=${CARLA_SIM_HOST}"] + python3 /opt/leaderboard/leaderboard/leaderboard_evaluator.py \ + --debug=${DEBUG_CHALLENGE} \ + --repetitions=${REPETITIONS} \ + --checkpoint=${CHECKPOINT_ENDPOINT} \ + --track=${CHALLENGE_TRACK} \ + --agent=${TEAM_AGENT} \ + --routes=${ROUTES} \ + --host=${CARLA_SIM_HOST}"] From 5d522ebbd3919365f57c3a4fbb94325bb8827b3d Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Tue, 15 Oct 2024 06:31:01 +0000 Subject: [PATCH 27/28] Applied black formatting --- code/acting/setup.py | 3 +- code/acting/src/acting/Acting_Debug_Node.py | 167 +++-- code/acting/src/acting/MainFramePublisher.py | 54 +- code/acting/src/acting/helper_functions.py | 61 +- .../src/acting/pure_pursuit_controller.py | 96 +-- code/acting/src/acting/stanley_controller.py | 103 +-- code/acting/src/acting/vehicle_controller.py | 98 +-- code/acting/src/acting/velocity_controller.py | 51 +- code/agent/setup.py | 3 +- code/agent/src/agent/agent.py | 146 ++-- code/mock/setup.py | 3 +- code/mock/src/mock_intersection_clear.py | 20 +- code/mock/src/mock_stop_sign.py | 21 +- code/mock/src/mock_traffic_light.py | 21 +- code/perception/setup.py | 3 +- .../src/coordinate_transformation.py | 13 +- code/perception/src/dataset_converter.py | 57 +- code/perception/src/dataset_generator.py | 168 +++-- .../Position_Heading_Datasets/viz.py | 396 ++++++----- .../src/global_plan_distance_publisher.py | 47 +- code/perception/src/kalman_filter.py | 118 ++-- code/perception/src/lidar_distance.py | 157 +++-- code/perception/src/lidar_filter_utility.py | 21 +- 
.../src/position_heading_filter_debug_node.py | 272 ++++---- .../src/position_heading_publisher_node.py | 79 ++- .../src/data_generation/weights_organizer.py | 22 +- .../src/traffic_light_config.py | 2 +- .../classification_model.py | 26 +- .../traffic_light_inference.py | 57 +- .../traffic_light_training.py | 107 +-- .../src/traffic_light_detection/transforms.py | 4 +- code/perception/src/traffic_light_node.py | 25 +- code/perception/src/vision_node.py | 242 +++---- code/planning/setup.py | 3 +- .../src/behavior_agent/behavior_tree.py | 165 +++-- .../behaviours/behavior_speed.py | 1 - .../behavior_agent/behaviours/intersection.py | 133 ++-- .../behavior_agent/behaviours/lane_change.py | 69 +- .../behavior_agent/behaviours/maneuvers.py | 157 +++-- .../src/behavior_agent/behaviours/meta.py | 26 +- .../src/behavior_agent/behaviours/overtake.py | 70 +- .../behaviours/road_features.py | 66 +- .../behaviours/topics2blackboard.py | 124 ++-- .../behaviours/traffic_objects.py | 35 +- .../src/global_planner/dev_global_route.py | 79 ++- .../src/global_planner/global_planner.py | 136 ++-- .../src/global_planner/help_functions.py | 86 +-- .../global_planner/preplanning_trajectory.py | 636 ++++++++++-------- code/planning/src/local_planner/ACC.py | 77 ++- .../src/local_planner/collision_check.py | 84 +-- .../src/local_planner/motion_planning.py | 159 +++-- code/planning/src/local_planner/utils.py | 52 +- code/test-route/src/test_route.py | 82 ++- doc/development/templates/template_class.py | 1 + .../templates/template_class_no_comments.py | 8 +- .../globals.py | 18 +- .../object-detection-model_evaluation/pt.py | 56 +- .../pylot.py | 70 +- .../object-detection-model_evaluation/yolo.py | 39 +- doc/research/paf23/planning/test_traj.py | 58 +- 60 files changed, 2825 insertions(+), 2328 deletions(-) diff --git a/code/acting/setup.py b/code/acting/setup.py index e2665b1f..773f1357 100644 --- a/code/acting/setup.py +++ b/code/acting/setup.py @@ -2,6 +2,5 @@ from distutils.core import 
setup from catkin_pkg.python_setup import generate_distutils_setup -setup_args = generate_distutils_setup(packages=['acting'], - package_dir={'': 'src'}) +setup_args = generate_distutils_setup(packages=["acting"], package_dir={"": "src"}) setup(**setup_args) diff --git a/code/acting/src/acting/Acting_Debug_Node.py b/code/acting/src/acting/Acting_Debug_Node.py index 99839e18..b3289747 100755 --- a/code/acting/src/acting/Acting_Debug_Node.py +++ b/code/acting/src/acting/Acting_Debug_Node.py @@ -71,35 +71,33 @@ def __init__(self): Constructor of the class :return: """ - super(Acting_Debug_Node, self).__init__('dummy_trajectory_pub') - self.loginfo('Acting_Debug_Node node started') - self.role_name = self.get_param('role_name', 'ego_vehicle') - self.control_loop_rate = self.get_param('control_loop_rate', 0.05) + super(Acting_Debug_Node, self).__init__("dummy_trajectory_pub") + self.loginfo("Acting_Debug_Node node started") + self.role_name = self.get_param("role_name", "ego_vehicle") + self.control_loop_rate = self.get_param("control_loop_rate", 0.05) # Publisher for Dummy Trajectory self.trajectory_pub: Publisher = self.new_publisher( - Path, - "/paf/" + self.role_name + "/trajectory", - qos_profile=1) + Path, "/paf/" + self.role_name + "/trajectory", qos_profile=1 + ) # Publisher for Dummy Velocity self.velocity_pub: Publisher = self.new_publisher( - Float32, - f"/paf/{self.role_name}/target_velocity", - qos_profile=1) + Float32, f"/paf/{self.role_name}/target_velocity", qos_profile=1 + ) # PurePursuit: Publisher for Dummy PP-Steer self.pure_pursuit_steer_pub: Publisher = self.new_publisher( - Float32, - f"/paf/{self.role_name}/pure_pursuit_steer", - qos_profile=1) + Float32, f"/paf/{self.role_name}/pure_pursuit_steer", qos_profile=1 + ) # Subscriber of current_pos, used for Steering Debugging self.current_pos_sub: Subscriber = self.new_subscription( msg_type=PoseStamped, topic="/paf/" + self.role_name + "/current_pos", callback=self.__current_position_callback, - 
qos_profile=1) + qos_profile=1, + ) # ---> EVALUATION/TUNING: Subscribers for plotting # Subscriber for target_velocity for plotting @@ -107,55 +105,61 @@ def __init__(self): Float32, f"/paf/{self.role_name}/target_velocity", self.__get_target_velocity, - qos_profile=1) + qos_profile=1, + ) # Subscriber for current_heading self.heading_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/current_heading", self.__get_heading, - qos_profile=1) + qos_profile=1, + ) # Subscriber for current_velocity self.current_velocity_sub: Subscriber = self.new_subscription( CarlaSpeedometer, f"/carla/{self.role_name}/Speed", self.__get_current_velocity, - qos_profile=1) + qos_profile=1, + ) # Subscriber for current_throttle self.current_throttle_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/throttle", self.__get_throttle, - qos_profile=1) + qos_profile=1, + ) # Subscriber for Stanley_steer self.stanley_steer_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/stanley_steer", self.__get_stanley_steer, - qos_profile=1) + qos_profile=1, + ) # Subscriber for PurePursuit_steer self.pure_pursuit_steer_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/pure_pursuit_steer", self.__get_purepursuit_steer, - qos_profile=1) + qos_profile=1, + ) # Subscriber for vehicle_steer self.vehicle_steer_sub: Subscriber = self.new_subscription( CarlaEgoVehicleControl, - f'/carla/{self.role_name}/vehicle_control_cmd', + f"/carla/{self.role_name}/vehicle_control_cmd", self.__get_vehicle_steer, - qos_profile=10) + qos_profile=10, + ) # Publisher for emergency brake testing self.emergency_pub: Publisher = self.new_publisher( - Bool, - f"/paf/{self.role_name}/emergency", - qos_profile=1) + Bool, f"/paf/{self.role_name}/emergency", qos_profile=1 + ) # Initialize all needed "global" variables here self.current_trajectory = [] @@ -181,16 +185,12 @@ def __init__(self): # Spawncoords at the simulationstart startx = 984.5 
starty = -5442.0 - if (TRAJECTORY_TYPE == 0): # Straight trajectory - self.current_trajectory = [ - (startx, starty), - (startx, starty-200) - ] + if TRAJECTORY_TYPE == 0: # Straight trajectory + self.current_trajectory = [(startx, starty), (startx, starty - 200)] - elif (TRAJECTORY_TYPE == 1): # straight into 90° Curve + elif TRAJECTORY_TYPE == 1: # straight into 90° Curve self.current_trajectory = [ (984.5, -5442.0), - (984.5, -5563.5), (985.0, -5573.2), (986.3, -5576.5), @@ -198,12 +198,11 @@ def __init__(self): (988.7, -5579.0), (990.5, -5579.8), (1000.0, -5580.2), - (1040.0, -5580.0), - (1070.0, -5580.0) + (1070.0, -5580.0), ] - elif (TRAJECTORY_TYPE == 2): # Sinewave Serpentines trajectory + elif TRAJECTORY_TYPE == 2: # Sinewave Serpentines trajectory # Generate a sine-wave with the global Constants to # automatically generate a trajectory with serpentine waves cycles = 4 # how many sine cycles @@ -224,53 +223,50 @@ def __init__(self): traj_y -= 2 trajectory_wave.append((traj_x, traj_y)) # back to the middle of the road - trajectory_wave.append((startx, traj_y-2)) + trajectory_wave.append((startx, traj_y - 2)) # add a long straight path after the serpentines - trajectory_wave.append((startx, starty-200)) + trajectory_wave.append((startx, starty - 200)) self.current_trajectory = trajectory_wave - elif (TRAJECTORY_TYPE == 3): # 2 Lane Switches + elif TRAJECTORY_TYPE == 3: # 2 Lane Switches self.current_trajectory = [ (startx, starty), - (startx-0.5, starty-10), - (startx-0.5, starty-20), - - (startx-0.4, starty-21), - (startx-0.3, starty-22), - (startx-0.2, starty-23), - (startx-0.1, starty-24), - (startx, starty-25), - (startx+0.1, starty-26), - (startx+0.2, starty-27), - (startx+0.3, starty-28), - (startx+0.4, starty-29), - (startx+0.5, starty-30), - (startx+0.6, starty-31), - (startx+0.7, starty-32), - (startx+0.8, starty-33), - (startx+0.9, starty-34), - (startx+1.0, starty-35), - (startx+1.0, starty-50), - - (startx+1.0, starty-51), - (startx+0.9, 
starty-52), - (startx+0.8, starty-53), - (startx+0.7, starty-54), - (startx+0.6, starty-55), - (startx+0.5, starty-56), - (startx+0.4, starty-57), - (startx+0.3, starty-58), - (startx+0.2, starty-59), - (startx+0.1, starty-60), - (startx, starty-61), - (startx-0.1, starty-62), - (startx-0.2, starty-63), - (startx-0.3, starty-64), - (startx-0.4, starty-65), - (startx-0.5, starty-66), - - (startx-0.5, starty-100), - ] + (startx - 0.5, starty - 10), + (startx - 0.5, starty - 20), + (startx - 0.4, starty - 21), + (startx - 0.3, starty - 22), + (startx - 0.2, starty - 23), + (startx - 0.1, starty - 24), + (startx, starty - 25), + (startx + 0.1, starty - 26), + (startx + 0.2, starty - 27), + (startx + 0.3, starty - 28), + (startx + 0.4, starty - 29), + (startx + 0.5, starty - 30), + (startx + 0.6, starty - 31), + (startx + 0.7, starty - 32), + (startx + 0.8, starty - 33), + (startx + 0.9, starty - 34), + (startx + 1.0, starty - 35), + (startx + 1.0, starty - 50), + (startx + 1.0, starty - 51), + (startx + 0.9, starty - 52), + (startx + 0.8, starty - 53), + (startx + 0.7, starty - 54), + (startx + 0.6, starty - 55), + (startx + 0.5, starty - 56), + (startx + 0.4, starty - 57), + (startx + 0.3, starty - 58), + (startx + 0.2, starty - 59), + (startx + 0.1, starty - 60), + (startx, starty - 61), + (startx - 0.1, starty - 62), + (startx - 0.2, starty - 63), + (startx - 0.3, starty - 64), + (startx - 0.4, starty - 65), + (startx - 0.5, starty - 66), + (startx - 0.5, starty - 100), + ] self.updated_trajectory(self.current_trajectory) def updated_trajectory(self, target_trajectory): @@ -347,21 +343,21 @@ def loop(timer_event=None): depending on the selected TEST_TYPE """ # Drive const. 
velocity on fixed straight steering - if (TEST_TYPE == 0): + if TEST_TYPE == 0: self.driveVel = TARGET_VELOCITY_1 self.pure_pursuit_steer_pub.publish(FIXED_STEERING) self.velocity_pub.publish(self.driveVel) # Drive alternating velocities on fixed straight steering - elif (TEST_TYPE == 1): + elif TEST_TYPE == 1: if not self.time_set: self.drive_Vel = TARGET_VELOCITY_1 self.switch_checkpoint_time = rospy.get_time() self.switch_time_set = True - if (self.switch_checkpoint_time < rospy.get_time() - 10): + if self.switch_checkpoint_time < rospy.get_time() - 10: self.switch_checkpoint_time = rospy.get_time() self.switchVelocity = not self.switchVelocity - if (self.switchVelocity): + if self.switchVelocity: self.driveVel = TARGET_VELOCITY_2 else: self.driveVel = TARGET_VELOCITY_1 @@ -369,7 +365,7 @@ def loop(timer_event=None): self.velocity_pub.publish(self.driveVel) # drive const. velocity on trajectoy with steering controller - elif (TEST_TYPE == 2): + elif TEST_TYPE == 2: # Continuously update path and publish it self.drive_Vel = TARGET_VELOCITY_1 self.updated_trajectory(self.current_trajectory) @@ -378,13 +374,13 @@ def loop(timer_event=None): # drive const. 
velocity on fixed straight steering and # trigger an emergency brake after 15 secs - elif (TEST_TYPE == 3): + elif TEST_TYPE == 3: # Continuously update path and publish it self.drive_Vel = TARGET_VELOCITY_1 if not self.time_set: self.checkpoint_time = rospy.get_time() self.time_set = True - if (self.checkpoint_time < rospy.get_time() - 15.0): + if self.checkpoint_time < rospy.get_time() - 15.0: self.checkpoint_time = rospy.get_time() self.emergency_pub.publish(True) self.pure_pursuit_steer_pub.publish(FIXED_STEERING) @@ -402,7 +398,7 @@ def loop(timer_event=None): print(">>>>>>>>>>>> TRAJECTORY <<<<<<<<<<<<<<") # Uncomment the prints of the data you want to plot - if (self.checkpoint_time < rospy.get_time() - PRINT_AFTER_TIME): + if self.checkpoint_time < rospy.get_time() - PRINT_AFTER_TIME: self.checkpoint_time = rospy.get_time() print(">>>>>>>>>>>> DATA <<<<<<<<<<<<<<") if PRINT_VELOCITY_DATA: @@ -420,6 +416,7 @@ def loop(timer_event=None): print(">> ACTUAL POSITIONS <<") print(self.positions) print(">>>>>>>>>>>> DATA <<<<<<<<<<<<<<") + self.new_timer(self.control_loop_rate, loop) self.spin() diff --git a/code/acting/src/acting/MainFramePublisher.py b/code/acting/src/acting/MainFramePublisher.py index 0ba4783d..a34240e8 100755 --- a/code/acting/src/acting/MainFramePublisher.py +++ b/code/acting/src/acting/MainFramePublisher.py @@ -19,11 +19,11 @@ def __init__(self): ego vehicle does. The hero frame is used by sensors like the lidar. Rviz also uses the hero frame. The main frame is used for planning. 
""" - super(MainFramePublisher, self).__init__('main_frame_publisher') - self.loginfo('MainFramePublisher node started') + super(MainFramePublisher, self).__init__("main_frame_publisher") + self.loginfo("MainFramePublisher node started") - self.control_loop_rate = self.get_param('control_loop_rate', 0.05) - self.role_name = self.get_param('role_name', 'ego_vehicle') + self.control_loop_rate = self.get_param("control_loop_rate", 0.05) + self.role_name = self.get_param("role_name", "ego_vehicle") self.current_pos: PoseStamped = PoseStamped() self.current_heading: float = 0 @@ -31,16 +31,18 @@ def __init__(self): PoseStamped, "/paf/" + self.role_name + "/current_pos", self.get_current_pos, - qos_profile=1) + qos_profile=1, + ) self.current_heading_subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/current_heading", self.get_current_heading, - qos_profile=1) + qos_profile=1, + ) def run(self): - self.loginfo('MainFramePublisher node running') + self.loginfo("MainFramePublisher node running") br = tf.TransformBroadcaster() def loop(timer_event=None): @@ -49,22 +51,26 @@ def loop(timer_event=None): return rot = -self.current_heading pos = [0, 0, 0] - pos[0] = cos(rot) * \ - self.current_pos.pose.position.x - \ - sin(rot) * self.current_pos.pose.position.y - pos[1] = sin(rot) * \ - self.current_pos.pose.position.x + \ - cos(rot) * self.current_pos.pose.position.y + pos[0] = ( + cos(rot) * self.current_pos.pose.position.x + - sin(rot) * self.current_pos.pose.position.y + ) + pos[1] = ( + sin(rot) * self.current_pos.pose.position.x + + cos(rot) * self.current_pos.pose.position.y + ) pos[2] = -self.current_pos.pose.position.z - rot_quat = R.from_euler("xyz", [0, 0, -self.current_heading+pi], - degrees=False).as_quat() - - br.sendTransform(pos, - rot_quat, - rospy.Time.now(), - "global", - "hero", - ) + rot_quat = R.from_euler( + "xyz", [0, 0, -self.current_heading + pi], degrees=False + ).as_quat() + + br.sendTransform( + pos, + rot_quat, + 
rospy.Time.now(), + "global", + "hero", + ) self.new_timer(self.control_loop_rate, loop) self.spin() @@ -81,7 +87,7 @@ def main(args=None): Main function starts the node :param args: """ - roscomp.init('main_frame_publisher', args=args) + roscomp.init("main_frame_publisher", args=args) try: node = MainFramePublisher() @@ -92,5 +98,5 @@ def main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/acting/src/acting/helper_functions.py b/code/acting/src/acting/helper_functions.py index 072e9aa2..8dc2c6d8 100755 --- a/code/acting/src/acting/helper_functions.py +++ b/code/acting/src/acting/helper_functions.py @@ -106,10 +106,10 @@ def calc_path_yaw(path: Path, idx: int) -> float: point_current = path.poses[idx] point_next: PoseStamped point_next = path.poses[idx + 1] - angle = math.atan2(point_next.pose.position.y - - point_current.pose.position.y, - point_next.pose.position.x - - point_current.pose.position.x) + angle = math.atan2( + point_next.pose.position.y - point_current.pose.position.y, + point_next.pose.position.x - point_current.pose.position.x, + ) return normalize_angle(angle) @@ -134,14 +134,19 @@ def calc_egocar_yaw(pose: PoseStamped) -> float: :param pose: The current pose of the ego vehicle :return: normalized yaw of the vehicle """ - quaternion = (pose.pose.orientation.x, pose.pose.orientation.y, - pose.pose.orientation.z, pose.pose.orientation.w) + quaternion = ( + pose.pose.orientation.x, + pose.pose.orientation.y, + pose.pose.orientation.z, + pose.pose.orientation.w, + ) _, _, yaw = euler_from_quaternion(quaternion) return normalize_angle(yaw) -def points_to_vector(p_1: Tuple[float, float], - p_2: Tuple[float, float]) -> Tuple[float, float]: +def points_to_vector( + p_1: Tuple[float, float], p_2: Tuple[float, float] +) -> Tuple[float, float]: """ Create the vector starting at p1 and ending at p2 :param p_1: Start point @@ -157,11 +162,12 @@ def vector_len(vec: Tuple[float, float]) -> float: 
:param vec: vector v as a tuple (x, y) :return: length of vector v """ - return sqrt(vec[0]**2 + vec[1]**2) + return sqrt(vec[0] ** 2 + vec[1] ** 2) -def add_vector(v_1: Tuple[float, float], - v_2: Tuple[float, float]) -> Tuple[float, float]: +def add_vector( + v_1: Tuple[float, float], v_2: Tuple[float, float] +) -> Tuple[float, float]: """ Add the two given vectors :param v_1: first vector @@ -172,20 +178,22 @@ def add_vector(v_1: Tuple[float, float], return v_1[0] + v_2[0], v_1[1] + v_2[1] -def rotate_vector(vector: Tuple[float, float], - angle_rad: float) -> Tuple[float, float]: +def rotate_vector(vector: Tuple[float, float], angle_rad: float) -> Tuple[float, float]: """ Rotate the given vector by an angle :param vector: vector :param angle_rad: angle of rotation :return: rotated angle """ - return (cos(angle_rad) * vector[0] - sin(angle_rad) * vector[1], - sin(angle_rad) * vector[0] + cos(angle_rad) * vector[1]) + return ( + cos(angle_rad) * vector[0] - sin(angle_rad) * vector[1], + sin(angle_rad) * vector[0] + cos(angle_rad) * vector[1], + ) -def linear_interpolation(start: Tuple[float, float], end: Tuple[float, float], - interval_m: float) -> List[Tuple[float, float]]: +def linear_interpolation( + start: Tuple[float, float], end: Tuple[float, float], interval_m: float +) -> List[Tuple[float, float]]: """ Interpolate linearly between start and end, with a minimal distance of interval_m between points. 
@@ -200,21 +208,23 @@ def linear_interpolation(start: Tuple[float, float], end: Tuple[float, float], steps = max(1, floor(distance / interval_m)) exceeds_interval_cap = distance > interval_m - step_vector = (vector[0] / steps if exceeds_interval_cap else vector[0], - vector[1] / steps if exceeds_interval_cap else vector[1]) + step_vector = ( + vector[0] / steps if exceeds_interval_cap else vector[0], + vector[1] / steps if exceeds_interval_cap else vector[1], + ) lin_points = [(start[0], start[1])] for i in range(1, steps): lin_points.append( - (start[0] + step_vector[0] * i, - start[1] + step_vector[1] * i) + (start[0] + step_vector[0] * i, start[1] + step_vector[1] * i) ) return lin_points -def _clean_route_duplicates(route: List[Tuple[float, float]], - min_dist: float) -> List[Tuple[float, float]]: +def _clean_route_duplicates( + route: List[Tuple[float, float]], min_dist: float +) -> List[Tuple[float, float]]: """ Remove duplicates in the given List of tuples, if the distance between them is less than min_dist. 
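As an aside on the `linear_interpolation` helper reformatted above: its behavior (place points every `interval_m` meters from `start` toward `end`, end point excluded, falling back to a single point when the segment is shorter than the interval) can be sketched standalone. This is a simplified re-implementation for illustration, not the project code; the function name `lerp_points` is made up here.

```python
from math import floor, sqrt


def lerp_points(start, end, interval_m):
    """Points every interval_m meters from start toward end (end excluded).

    Mirrors the helper's semantics: at least one point (the start) is
    always returned, and the step vector is only subdivided when the
    segment is longer than interval_m.
    """
    vec = (end[0] - start[0], end[1] - start[1])
    dist = sqrt(vec[0] ** 2 + vec[1] ** 2)
    steps = max(1, floor(dist / interval_m))
    step = (vec[0] / steps, vec[1] / steps) if dist > interval_m else vec
    return [(start[0] + step[0] * i, start[1] + step[1] * i) for i in range(steps)]
```

For a 2 m straight segment sampled at 0.5 m this yields four points, the caller (`interpolate_route` in this file) then appends the final waypoint itself.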
@@ -243,8 +253,9 @@ def interpolate_route(orig_route: List[Tuple[float, float]], interval_m=0.5): orig_route = _clean_route_duplicates(orig_route, 0.1) route = [] for index in range(len(orig_route) - 1): - waypoints = linear_interpolation(orig_route[index], - orig_route[index + 1], interval_m) + waypoints = linear_interpolation( + orig_route[index], orig_route[index + 1], interval_m + ) route.extend(waypoints) route = route + [orig_route[-1]] diff --git a/code/acting/src/acting/pure_pursuit_controller.py b/code/acting/src/acting/pure_pursuit_controller.py index 04740418..4259fe78 100755 --- a/code/acting/src/acting/pure_pursuit_controller.py +++ b/code/acting/src/acting/pure_pursuit_controller.py @@ -26,45 +26,44 @@ class PurePursuitController(CompatibleNode): def __init__(self): - super(PurePursuitController, self).__init__('pure_pursuit_controller') - self.loginfo('PurePursuitController node started') + super(PurePursuitController, self).__init__("pure_pursuit_controller") + self.loginfo("PurePursuitController node started") - self.control_loop_rate = self.get_param('control_loop_rate', 0.05) - self.role_name = self.get_param('role_name', 'ego_vehicle') + self.control_loop_rate = self.get_param("control_loop_rate", 0.05) + self.role_name = self.get_param("role_name", "ego_vehicle") self.position_sub: Subscriber = self.new_subscription( - Path, - f"/paf/{self.role_name}/trajectory", - self.__set_path, - qos_profile=1) + Path, f"/paf/{self.role_name}/trajectory", self.__set_path, qos_profile=1 + ) self.path_sub: Subscriber = self.new_subscription( PoseStamped, f"/paf/{self.role_name}/current_pos", self.__set_position, - qos_profile=1) + qos_profile=1, + ) self.velocity_sub: Subscriber = self.new_subscription( CarlaSpeedometer, f"/carla/{self.role_name}/Speed", self.__set_velocity, - qos_profile=1) + qos_profile=1, + ) self.heading_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/current_heading", self.__set_heading, - qos_profile=1) + 
qos_profile=1, + ) self.pure_pursuit_steer_pub: Publisher = self.new_publisher( - Float32, - f"/paf/{self.role_name}/pure_pursuit_steer", - qos_profile=1) + Float32, f"/paf/{self.role_name}/pure_pursuit_steer", qos_profile=1 + ) self.debug_msg_pub: Publisher = self.new_publisher( - Debug, - f"/paf/{self.role_name}/pure_p_debug", - qos_profile=1) + Debug, f"/paf/{self.role_name}/pure_p_debug", qos_profile=1 + ) self.__position: tuple[float, float] = None # x, y self.__path: Path = None @@ -77,7 +76,7 @@ def run(self): Starts the main loop of the node :return: """ - self.loginfo('PurePursuitController node running') + self.loginfo("PurePursuitController node running") def loop(timer_event=None): """ @@ -86,26 +85,34 @@ def loop(timer_event=None): :return: """ if self.__path is None: - self.logdebug("PurePursuitController hasn't received a path " - "yet and can therefore not publish steering") + self.logdebug( + "PurePursuitController hasn't received a path " + "yet and can therefore not publish steering" + ) return if self.__position is None: - self.logdebug("PurePursuitController hasn't received the " - "position of the vehicle yet " - "and can therefore not publish steering") + self.logdebug( + "PurePursuitController hasn't received the " + "position of the vehicle yet " + "and can therefore not publish steering" + ) return if self.__heading is None: - self.logdebug("PurePursuitController hasn't received the " - "heading of the vehicle yet and " - "can therefore not publish steering") + self.logdebug( + "PurePursuitController hasn't received the " + "heading of the vehicle yet and " + "can therefore not publish steering" + ) return if self.__velocity is None: - self.logdebug("PurePursuitController hasn't received the " - "velocity of the vehicle yet " - "and can therefore not publish steering") + self.logdebug( + "PurePursuitController hasn't received the " + "velocity of the vehicle yet " + "and can therefore not publish steering" + ) return 
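As an aside on this controller: the pure-pursuit steering reformatted in these hunks follows the usual pattern of clipping a speed-proportional look-ahead distance (`np.clip(K_LAD * velocity, MIN_LA_DISTANCE, MAX_LA_DISTANCE)`) and steering toward the heading of the vector to the look-ahead waypoint. A minimal standalone sketch, with illustrative constants rather than the project's actual tuning values:

```python
import math

# Illustrative tuning constants, not the values used by this node
K_LAD, MIN_LA, MAX_LA = 0.85, 2.0, 25.0


def look_ahead_distance(velocity):
    # la_dist = MIN_LA <= K_LAD * velocity <= MAX_LA
    return min(max(K_LAD * velocity, MIN_LA), MAX_LA)


def pure_pursuit_heading_error(pos, target, heading):
    """Angle between the vehicle heading and the vector to the target point."""
    target_heading = math.atan2(target[1] - pos[1], target[0] - pos[0])
    err = target_heading - heading
    # normalize to (-pi, pi]
    while err > math.pi:
        err -= 2 * math.pi
    while err <= -math.pi:
        err += 2 * math.pi
    return err
```

The clipping keeps the target point close at low speed (tight tracking) and far at high speed (smooth steering), which is the standard pure-pursuit trade-off.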
self.pure_pursuit_steer_pub.publish(self.__calculate_steer()) @@ -119,16 +126,17 @@ def __calculate_steer(self) -> float: :return: """ # la_dist = MIN_LA_DISTANCE <= K_LAD * velocity <= MAX_LA_DISTANCE - look_ahead_dist = np.clip(K_LAD * self.__velocity, - MIN_LA_DISTANCE, MAX_LA_DISTANCE) + look_ahead_dist = np.clip( + K_LAD * self.__velocity, MIN_LA_DISTANCE, MAX_LA_DISTANCE + ) # Get the target position on the trajectory in look_ahead distance self.__tp_idx = self.__get_target_point_index(look_ahead_dist) target_wp: PoseStamped = self.__path.poses[self.__tp_idx] # Get the vector from the current position to the target position - target_v_x, target_v_y = points_to_vector((self.__position[0], - self.__position[1]), - (target_wp.pose.position.x, - target_wp.pose.position.y)) + target_v_x, target_v_y = points_to_vector( + (self.__position[0], self.__position[1]), + (target_wp.pose.position.x, target_wp.pose.position.y), + ) # Get the target heading from that vector target_vector_heading = vector_angle(target_v_x, target_v_y) # Get the error between current heading and target heading @@ -181,7 +189,7 @@ def __dist_to(self, pos: Point) -> float: y_current = self.__position[1] x_target = pos.x y_target = pos.y - d = (x_target - x_current)**2 + (y_target - y_current)**2 + d = (x_target - x_current) ** 2 + (y_target - y_current) ** 2 return math.sqrt(d) def __set_position(self, data: PoseStamped, min_diff=0.001): @@ -206,9 +214,11 @@ def __set_position(self, data: PoseStamped, min_diff=0.001): # if new position is to close to current, do not accept it # too close = closer than min_diff = 0.001 meters # for debugging purposes: - self.logdebug("New position disregarded, " - f"as dist ({round(dist, 3)}) to current pos " - f"< min_diff ({round(min_diff, 3)})") + self.logdebug( + "New position disregarded, " + f"as dist ({round(dist, 3)}) to current pos " + f"< min_diff ({round(min_diff, 3)})" + ) return new_x = data.pose.position.x new_y = data.pose.position.y @@ -230,10 
+240,10 @@ def __set_velocity(self, data: CarlaSpeedometer): def main(args=None): """ - main function starts the pure pursuit controller node - :param args: + main function starts the pure pursuit controller node + :param args: """ - roscomp.init('pure_pursuit_controller', args=args) + roscomp.init("pure_pursuit_controller", args=args) try: node = PurePursuitController() @@ -244,5 +254,5 @@ def main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/acting/src/acting/stanley_controller.py b/code/acting/src/acting/stanley_controller.py index e0cbc190..3463e90e 100755 --- a/code/acting/src/acting/stanley_controller.py +++ b/code/acting/src/acting/stanley_controller.py @@ -20,46 +20,45 @@ class StanleyController(CompatibleNode): def __init__(self): - super(StanleyController, self).__init__('stanley_controller') - self.loginfo('StanleyController node started') + super(StanleyController, self).__init__("stanley_controller") + self.loginfo("StanleyController node started") - self.control_loop_rate = self.get_param('control_loop_rate', 0.05) - self.role_name = self.get_param('role_name', 'ego_vehicle') + self.control_loop_rate = self.get_param("control_loop_rate", 0.05) + self.role_name = self.get_param("role_name", "ego_vehicle") # Subscribers self.position_sub: Subscriber = self.new_subscription( - Path, - f"/paf/{self.role_name}/trajectory", - self.__set_path, - qos_profile=1) + Path, f"/paf/{self.role_name}/trajectory", self.__set_path, qos_profile=1 + ) self.path_sub: Subscriber = self.new_subscription( PoseStamped, f"/paf/{self.role_name}/current_pos", self.__set_position, - qos_profile=1) + qos_profile=1, + ) self.velocity_sub: Subscriber = self.new_subscription( CarlaSpeedometer, f"/carla/{self.role_name}/Speed", self.__set_velocity, - qos_profile=1) + qos_profile=1, + ) self.heading_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/current_heading", self.__set_heading, - 
qos_profile=1) + qos_profile=1, + ) self.stanley_steer_pub: Publisher = self.new_publisher( - Float32, - f"/paf/{self.role_name}/stanley_steer", - qos_profile=1) + Float32, f"/paf/{self.role_name}/stanley_steer", qos_profile=1 + ) self.debug_publisher: Publisher = self.new_publisher( - StanleyDebug, - f"/paf/{self.role_name}/stanley_debug", - qos_profile=1) + StanleyDebug, f"/paf/{self.role_name}/stanley_debug", qos_profile=1 + ) self.__position: tuple[float, float] = None # x , y self.__path: Path = None @@ -71,7 +70,7 @@ def run(self): Starts the main loop of the node :return: """ - self.loginfo('StanleyController node running') + self.loginfo("StanleyController node running") def loop(timer_event=None): """ @@ -80,25 +79,33 @@ def loop(timer_event=None): :return: """ if self.__path is None: - self.logwarn("StanleyController hasn't received a path yet " - "and can therefore not publish steering") + self.logwarn( + "StanleyController hasn't received a path yet " + "and can therefore not publish steering" + ) return if self.__position is None: - self.logwarn("StanleyController hasn't received the" - "position of the vehicle yet " - "and can therefore not publish steering") + self.logwarn( + "StanleyController hasn't received the" + "position of the vehicle yet " + "and can therefore not publish steering" + ) return if self.__heading is None: - self.logwarn("StanleyController hasn't received the" - "heading of the vehicle yet and" - "can therefore not publish steering") + self.logwarn( + "StanleyController hasn't received the" + "heading of the vehicle yet and" + "can therefore not publish steering" + ) return if self.__velocity is None: - self.logwarn("StanleyController hasn't received the " - "velocity of the vehicle yet " - "and can therefore not publish steering") + self.logwarn( + "StanleyController hasn't received the " + "velocity of the vehicle yet " + "and can therefore not publish steering" + ) return 
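As an aside on this controller: the steering law reformatted below is the classic Stanley formula, heading error plus an `atan` cross-track correction scaled by `K_CROSSERR` and divided by the current velocity. A standalone sketch; the gain and the low-speed guard here are illustrative additions, not the node's actual values or logic:

```python
import math

K_CROSSERR = 0.4  # illustrative gain, not the project's tuning


def stanley_steer(heading_err, cross_err, velocity):
    """Stanley control law: heading error plus cross-track correction term."""
    v = max(velocity, 0.1)  # guard against division by zero at standstill
    return heading_err + math.atan(K_CROSSERR * cross_err / v)
```

The division by velocity softens the cross-track correction at speed, which is why the node needs valid velocity data before it can publish steering, exactly what the guard clauses above check.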
self.stanley_steer_pub.publish(self.__calculate_steer()) @@ -125,8 +132,9 @@ def __calculate_steer(self) -> float: closest_point: PoseStamped = self.__path.poses[closest_point_idx] cross_err = self.__get_cross_err(closest_point.pose.position) # * -1 because it is inverted compared to PurePursuit - steering_angle = 1 * (heading_err + atan((K_CROSSERR * cross_err) - / current_velocity)) + steering_angle = 1 * ( + heading_err + atan((K_CROSSERR * cross_err) / current_velocity) + ) # -> for debugging debug_msg = StanleyDebug() debug_msg.heading = self.__heading @@ -174,10 +182,11 @@ def __get_path_heading(self, index: int) -> float: if index > 0: # Calculate heading from the previous point on the trajectory - prv_point: Point = self.__path.poses[index-1].pose.position + prv_point: Point = self.__path.poses[index - 1].pose.position - prv_v_x, prv_v_y = points_to_vector((prv_point.x, prv_point.y), - (cur_pos.x, cur_pos.y)) + prv_v_x, prv_v_y = points_to_vector( + (prv_point.x, prv_point.y), (cur_pos.x, cur_pos.y) + ) heading_sum += vector_angle(prv_v_x, prv_v_y) heading_sum_args += 1 @@ -186,8 +195,9 @@ def __get_path_heading(self, index: int) -> float: # Calculate heading to the following point on the trajectory aft_point: Point = self.__path.poses[index + 1].pose.position - aft_v_x, aft_v_y = points_to_vector((aft_point.x, aft_point.y), - (cur_pos.x, cur_pos.y)) + aft_v_x, aft_v_y = points_to_vector( + (aft_point.x, aft_point.y), (cur_pos.x, cur_pos.y) + ) heading_sum += vector_angle(aft_v_x, aft_v_y) heading_sum_args += 1 @@ -210,8 +220,10 @@ def __get_cross_err(self, pos: Point) -> float: if self.__heading is not None: alpha = self.__heading + (math.pi / 2) v_e_0 = (0, 1) - v_e = (cos(alpha)*v_e_0[0] - sin(alpha)*v_e_0[1], - sin(alpha)*v_e_0[0] + cos(alpha)*v_e_0[1]) + v_e = ( + cos(alpha) * v_e_0[0] - sin(alpha) * v_e_0[1], + sin(alpha) * v_e_0[0] + cos(alpha) * v_e_0[1], + ) # define a vector (v_ab) with length 10 centered on the cur pos # of the vehicle, with a 
heading parallel to that of the vehicle @@ -221,8 +233,7 @@ def __get_cross_err(self, pos: Point) -> float: v_ab = (b[0] - a[0], b[1] - a[1]) v_am = (pos.x - a[0], pos.y - a[1]) - c = np.array([[v_ab[0], v_am[0]], - [v_ab[1], v_am[1]]]) + c = np.array([[v_ab[0], v_am[0]], [v_ab[1], v_am[1]]]) temp_sign = np.linalg.det(c) min_sign = 0.01 # to avoid rounding errors @@ -268,9 +279,11 @@ def __set_position(self, data: PoseStamped, min_diff=0.001): # check if the new position is valid dist = self.__dist_to(data.pose.position) if dist < min_diff: - self.logdebug("New position disregarded, " - f"as dist ({round(dist, 3)}) to current pos " - f"< min_diff ({round(min_diff, 3)})") + self.logdebug( + "New position disregarded, " + f"as dist ({round(dist, 3)}) to current pos " + f"< min_diff ({round(min_diff, 3)})" + ) return new_x = data.pose.position.x @@ -296,7 +309,7 @@ def main(args=None): Main function starts the node :param args: """ - roscomp.init('stanley_controller', args=args) + roscomp.init("stanley_controller", args=args) try: node = StanleyController() @@ -307,5 +320,5 @@ def main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/acting/src/acting/vehicle_controller.py b/code/acting/src/acting/vehicle_controller.py index e97bc1c8..90aa0f68 100755 --- a/code/acting/src/acting/vehicle_controller.py +++ b/code/acting/src/acting/vehicle_controller.py @@ -21,41 +21,45 @@ class VehicleController(CompatibleNode): """ def __init__(self): - super(VehicleController, self).__init__('vehicle_controller') - self.loginfo('VehicleController node started') - self.control_loop_rate = self.get_param('control_loop_rate', 0.05) - self.role_name = self.get_param('role_name', 'ego_vehicle') + super(VehicleController, self).__init__("vehicle_controller") + self.loginfo("VehicleController node started") + self.control_loop_rate = self.get_param("control_loop_rate", 0.05) + self.role_name = self.get_param("role_name", 
"ego_vehicle") self.__curr_behavior = None # only unstuck behavior is relevant here # Publisher for Carla Vehicle Control Commands self.control_publisher: Publisher = self.new_publisher( CarlaEgoVehicleControl, - f'/carla/{self.role_name}/vehicle_control_cmd', - qos_profile=10) + f"/carla/{self.role_name}/vehicle_control_cmd", + qos_profile=10, + ) # Publisher for Status TODO: Maybe unneccessary self.status_pub: Publisher = self.new_publisher( Bool, f"/carla/{self.role_name}/status", qos_profile=QoSProfile( - depth=1, - durability=DurabilityPolicy.TRANSIENT_LOCAL)) + depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL + ), + ) # Publisher for which steering-controller is mainly used # 1 = PurePursuit and 2 = Stanley self.controller_pub: Publisher = self.new_publisher( Float32, f"/paf/{self.role_name}/controller", - qos_profile=QoSProfile(depth=10, - durability=DurabilityPolicy.TRANSIENT_LOCAL) + qos_profile=QoSProfile( + depth=10, durability=DurabilityPolicy.TRANSIENT_LOCAL + ), ) self.emergency_pub: Publisher = self.new_publisher( Bool, f"/paf/{self.role_name}/emergency", - qos_profile=QoSProfile(depth=10, - durability=DurabilityPolicy.TRANSIENT_LOCAL) + qos_profile=QoSProfile( + depth=10, durability=DurabilityPolicy.TRANSIENT_LOCAL + ), ) # Subscribers @@ -63,51 +67,53 @@ def __init__(self): String, f"/paf/{self.role_name}/curr_behavior", self.__set_curr_behavior, - qos_profile=1) + qos_profile=1, + ) self.emergency_sub: Subscriber = self.new_subscription( Bool, f"/paf/{self.role_name}/emergency", self.__set_emergency, - qos_profile=QoSProfile(depth=10, - durability=DurabilityPolicy.TRANSIENT_LOCAL) + qos_profile=QoSProfile( + depth=10, durability=DurabilityPolicy.TRANSIENT_LOCAL + ), ) self.velocity_sub: Subscriber = self.new_subscription( CarlaSpeedometer, f"/carla/{self.role_name}/Speed", self.__get_velocity, - qos_profile=1) + qos_profile=1, + ) self.throttle_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/throttle", 
self.__set_throttle, - qos_profile=1) + qos_profile=1, + ) self.brake_sub: Subscriber = self.new_subscription( - Float32, - f"/paf/{self.role_name}/brake", - self.__set_brake, - qos_profile=1) + Float32, f"/paf/{self.role_name}/brake", self.__set_brake, qos_profile=1 + ) self.reverse_sub: Subscriber = self.new_subscription( - Bool, - f"/paf/{self.role_name}/reverse", - self.__set_reverse, - qos_profile=1) + Bool, f"/paf/{self.role_name}/reverse", self.__set_reverse, qos_profile=1 + ) self.pure_pursuit_steer_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/pure_pursuit_steer", self.__set_pure_pursuit_steer, - qos_profile=1) + qos_profile=1, + ) self.stanley_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/stanley_steer", self.__set_stanley_steer, - qos_profile=1) + qos_profile=1, + ) self.__reverse: bool = False self.__emergency: bool = False @@ -123,7 +129,7 @@ def run(self): :return: """ self.status_pub.publish(True) - self.loginfo('VehicleController node running') + self.loginfo("VehicleController node running") def loop(timer_event=None) -> None: """ @@ -143,8 +149,10 @@ def loop(timer_event=None) -> None: steer = self._s_steer else: # while doing the unstuck routine we don't want to steer - if self.__curr_behavior == "us_unstuck" or \ - self.__curr_behavior == "us_stop": + if ( + self.__curr_behavior == "us_unstuck" + or self.__curr_behavior == "us_stop" + ): steer = 0 else: steer = self._p_steer @@ -157,8 +165,7 @@ def loop(timer_event=None) -> None: message.throttle = self.__throttle message.brake = self.__brake message.steer = steer - message.header.stamp = roscomp.ros_timestamp(self.get_time(), - from_sec=True) + message.header.stamp = roscomp.ros_timestamp(self.get_time(), from_sec=True) self.control_publisher.publish(message) self.new_timer(self.control_loop_rate, loop) @@ -204,8 +211,7 @@ def __emergency_brake(self, active) -> None: message.reverse = True message.hand_brake = True 
message.manual_gear_shift = False - message.header.stamp = roscomp.ros_timestamp(self.get_time(), - from_sec=True) + message.header.stamp = roscomp.ros_timestamp(self.get_time(), from_sec=True) else: self.__emergency = False message.throttle = 0 @@ -214,8 +220,7 @@ def __emergency_brake(self, active) -> None: message.reverse = False message.hand_brake = False message.manual_gear_shift = False - message.header.stamp = roscomp.ros_timestamp(self.get_time(), - from_sec=True) + message.header.stamp = roscomp.ros_timestamp(self.get_time(), from_sec=True) self.control_publisher.publish(message) def __get_velocity(self, data: CarlaSpeedometer) -> None: @@ -230,11 +235,12 @@ def __get_velocity(self, data: CarlaSpeedometer) -> None: return if data.speed < 0.1: # vehicle has come to a stop self.__emergency_brake(False) - self.loginfo("Emergency breaking disengaged " - "(Emergency breaking has been executed successfully)") + self.loginfo( + "Emergency breaking disengaged " + "(Emergency breaking has been executed successfully)" + ) for _ in range(7): # publish 7 times just to be safe - self.emergency_pub.publish( - Bool(False)) + self.emergency_pub.publish(Bool(False)) def __set_throttle(self, data): self.__throttle = data.data @@ -246,12 +252,12 @@ def __set_reverse(self, data): self.__reverse = data.data def __set_pure_pursuit_steer(self, data: Float32): - r = (math.pi / 2) # convert from RAD to [-1;1] - self._p_steer = (data.data / r) + r = math.pi / 2 # convert from RAD to [-1;1] + self._p_steer = data.data / r def __set_stanley_steer(self, data: Float32): - r = (math.pi / 2) # convert from RAD to [-1;1] - self._s_steer = (data.data / r) + r = math.pi / 2 # convert from RAD to [-1;1] + self._s_steer = data.data / r def main(args=None): @@ -259,7 +265,7 @@ def main(args=None): Main function starts the node :param args: """ - roscomp.init('vehicle_controller', args=args) + roscomp.init("vehicle_controller", args=args) try: node = VehicleController() @@ -270,5 +276,5 @@ def 
main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/acting/src/acting/velocity_controller.py b/code/acting/src/acting/velocity_controller.py index bf43ab41..db1f53aa 100755 --- a/code/acting/src/acting/velocity_controller.py +++ b/code/acting/src/acting/velocity_controller.py @@ -15,38 +15,37 @@ class VelocityController(CompatibleNode): """ def __init__(self): - super(VelocityController, self).__init__('velocity_controller') - self.loginfo('VelocityController node started') + super(VelocityController, self).__init__("velocity_controller") + self.loginfo("VelocityController node started") - self.control_loop_rate = self.get_param('control_loop_rate', 0.05) - self.role_name = self.get_param('role_name', 'ego_vehicle') + self.control_loop_rate = self.get_param("control_loop_rate", 0.05) + self.role_name = self.get_param("role_name", "ego_vehicle") self.target_velocity_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/target_velocity", self.__get_target_velocity, - qos_profile=1) + qos_profile=1, + ) self.velocity_sub: Subscriber = self.new_subscription( CarlaSpeedometer, f"/carla/{self.role_name}/Speed", self.__get_current_velocity, - qos_profile=1) + qos_profile=1, + ) self.throttle_pub: Publisher = self.new_publisher( - Float32, - f"/paf/{self.role_name}/throttle", - qos_profile=1) + Float32, f"/paf/{self.role_name}/throttle", qos_profile=1 + ) self.brake_pub: Publisher = self.new_publisher( - Float32, - f"/paf/{self.role_name}/brake", - qos_profile=1) + Float32, f"/paf/{self.role_name}/brake", qos_profile=1 + ) self.reverse_pub: Publisher = self.new_publisher( - Bool, - f"/paf/{self.role_name}/reverse", - qos_profile=1) + Bool, f"/paf/{self.role_name}/reverse", qos_profile=1 + ) self.__current_velocity: float = None self.__target_velocity: float = None @@ -56,7 +55,7 @@ def run(self): Starts the main loop of the node :return: """ - self.loginfo('VelocityController node 
running') + self.loginfo("VelocityController node running") # PID for throttle pid_t = PID(0.60, 0.00076, 0.63) # since we use this for braking aswell, allow -1 to 0. @@ -71,15 +70,19 @@ def loop(timer_event=None): :return: """ if self.__target_velocity is None: - self.logdebug("VelocityController hasn't received target" - "_velocity yet. target_velocity has been set to" - "default value 0") + self.logdebug( + "VelocityController hasn't received target" + "_velocity yet. target_velocity has been set to" + "default value 0" + ) self.__target_velocity = 0 if self.__current_velocity is None: - self.logdebug("VelocityController hasn't received " - "current_velocity yet and can therefore not" - "publish a throttle value") + self.logdebug( + "VelocityController hasn't received " + "current_velocity yet and can therefore not" + "publish a throttle value" + ) return if self.__target_velocity < 0: @@ -135,7 +138,7 @@ def main(args=None): Main function starts the node :param args: """ - roscomp.init('velocity_controller', args=args) + roscomp.init("velocity_controller", args=args) try: node = VelocityController() @@ -146,5 +149,5 @@ def main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/agent/setup.py b/code/agent/setup.py index ad7850df..0b6d7399 100644 --- a/code/agent/setup.py +++ b/code/agent/setup.py @@ -2,6 +2,5 @@ from distutils.core import setup from catkin_pkg.python_setup import generate_distutils_setup -setup_args = generate_distutils_setup(packages=['agent'], - package_dir={'': 'src'}) +setup_args = generate_distutils_setup(packages=["agent"], package_dir={"": "src"}) setup(**setup_args) diff --git a/code/agent/src/agent/agent.py b/code/agent/src/agent/agent.py index 5a97786b..a57f2d6a 100755 --- a/code/agent/src/agent/agent.py +++ b/code/agent/src/agent/agent.py @@ -4,7 +4,7 @@ def get_entry_point(): - return 'PAFAgent' + return "PAFAgent" class PAFAgent(ROS1Agent): @@ -14,76 +14,102 @@ def 
setup(self, path_to_conf_file): def get_ros_entrypoint(self): return { - 'package': 'agent', - 'launch_file': 'agent.launch', - 'parameters': { - 'role_name': 'hero', - } + "package": "agent", + "launch_file": "agent.launch", + "parameters": { + "role_name": "hero", + }, } def sensors(self): sensors = [ { - 'type': 'sensor.camera.rgb', - 'id': 'Center', - 'x': 0.0, 'y': 0.0, 'z': 1.70, - 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0, - 'width': 1280, 'height': 720, 'fov': 100 - }, + "type": "sensor.camera.rgb", + "id": "Center", + "x": 0.0, + "y": 0.0, + "z": 1.70, + "roll": 0.0, + "pitch": 0.0, + "yaw": 0.0, + "width": 1280, + "height": 720, + "fov": 100, + }, { - 'type': 'sensor.camera.rgb', - 'id': 'Back', - 'x': 0.0, 'y': 0.0, 'z': 1.70, - 'roll': 0.0, 'pitch': 0.0, 'yaw': math.radians(180.0), - 'width': 1280, 'height': 720, 'fov': 100 - }, + "type": "sensor.camera.rgb", + "id": "Back", + "x": 0.0, + "y": 0.0, + "z": 1.70, + "roll": 0.0, + "pitch": 0.0, + "yaw": math.radians(180.0), + "width": 1280, + "height": 720, + "fov": 100, + }, { - 'type': 'sensor.camera.rgb', - 'id': 'Left', - 'x': 0.0, 'y': 0.0, 'z': 1.70, - 'roll': 0.0, 'pitch': 0.0, 'yaw': math.radians(-90.0), - 'width': 1280, 'height': 720, 'fov': 100 - }, + "type": "sensor.camera.rgb", + "id": "Left", + "x": 0.0, + "y": 0.0, + "z": 1.70, + "roll": 0.0, + "pitch": 0.0, + "yaw": math.radians(-90.0), + "width": 1280, + "height": 720, + "fov": 100, + }, { - 'type': 'sensor.camera.rgb', - 'id': 'Right', - 'x': 0.0, 'y': 0.0, 'z': 1.70, - 'roll': 0.0, 'pitch': 0.0, 'yaw': math.radians(90.0), - 'width': 1280, 'height': 720, 'fov': 100 - }, + "type": "sensor.camera.rgb", + "id": "Right", + "x": 0.0, + "y": 0.0, + "z": 1.70, + "roll": 0.0, + "pitch": 0.0, + "yaw": math.radians(90.0), + "width": 1280, + "height": 720, + "fov": 100, + }, { - 'type': 'sensor.lidar.ray_cast', - 'id': 'LIDAR', - 'x': 0.0, 'y': 0.0, 'z': 1.70, - 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0 - }, + "type": "sensor.lidar.ray_cast", + "id": 
"LIDAR", + "x": 0.0, + "y": 0.0, + "z": 1.70, + "roll": 0.0, + "pitch": 0.0, + "yaw": 0.0, + }, { - 'type': 'sensor.other.radar', - 'id': 'RADAR', - 'x': 2.0, 'y': 0.0, 'z': 0.7, - 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0, - 'horizontal_fov': 30, 'vertical_fov': 30 - }, + "type": "sensor.other.radar", + "id": "RADAR", + "x": 2.0, + "y": 0.0, + "z": 0.7, + "roll": 0.0, + "pitch": 0.0, + "yaw": 0.0, + "horizontal_fov": 30, + "vertical_fov": 30, + }, + {"type": "sensor.other.gnss", "id": "GPS", "x": 0.0, "y": 0.0, "z": 0.0}, { - 'type': 'sensor.other.gnss', - 'id': 'GPS', - 'x': 0.0, 'y': 0.0, 'z': 0.0 - }, - { - 'type': 'sensor.other.imu', - 'id': 'IMU', - 'x': 0.0, 'y': 0.0, 'z': 0.0, - 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0 - }, - { - 'type': 'sensor.opendrive_map', - 'id': 'OpenDRIVE', - 'reading_frequency': 1 - }, - { - 'type': 'sensor.speedometer', - 'id': 'Speed' - } + "type": "sensor.other.imu", + "id": "IMU", + "x": 0.0, + "y": 0.0, + "z": 0.0, + "roll": 0.0, + "pitch": 0.0, + "yaw": 0.0, + }, + {"type": "sensor.opendrive_map", "id": "OpenDRIVE", "reading_frequency": 1}, + {"type": "sensor.speedometer", "id": "Speed"}, ] return sensors diff --git a/code/mock/setup.py b/code/mock/setup.py index 8f232a1f..bf698614 100755 --- a/code/mock/setup.py +++ b/code/mock/setup.py @@ -1,6 +1,5 @@ from distutils.core import setup from catkin_pkg.python_setup import generate_distutils_setup -setup_args = generate_distutils_setup(packages=['mock'], - package_dir={'': 'src'}) +setup_args = generate_distutils_setup(packages=["mock"], package_dir={"": "src"}) setup(**setup_args) diff --git a/code/mock/src/mock_intersection_clear.py b/code/mock/src/mock_intersection_clear.py index df7cea62..bb584941 100755 --- a/code/mock/src/mock_intersection_clear.py +++ b/code/mock/src/mock_intersection_clear.py @@ -10,18 +10,17 @@ class MockIntersectionClearPublisher(CompatibleNode): This node publishes intersection clear information. It can be used for testing. 
""" + def __init__(self): - super(MockIntersectionClearPublisher, self).\ - __init__('intersectionClearMock') + super(MockIntersectionClearPublisher, self).__init__("intersectionClearMock") - self.control_loop_rate = self.get_param('control_loop_rate', 10) - self.role_name = self.get_param('role_name', 'ego_vehicle') + self.control_loop_rate = self.get_param("control_loop_rate", 10) + self.role_name = self.get_param("role_name", "ego_vehicle") # self.enabled = self.get_param('enabled', False) self.stop_sign_pub: Publisher = self.new_publisher( - Bool, - f"/paf/{self.role_name}/intersection_clear", - qos_profile=1) + Bool, f"/paf/{self.role_name}/intersection_clear", qos_profile=1 + ) self.delta = 0.2 self.distance = 75.0 self.isClear = False @@ -33,7 +32,7 @@ def run(self): """ # if not self.enabled: # return - self.loginfo('Stopsignmock node running') + self.loginfo("Stopsignmock node running") def loop(timer_event=None): """ @@ -47,6 +46,7 @@ def loop(timer_event=None): if self.distance < 0.0: self.isClear = True self.stop_sign_pub.publish(msg) + self.new_timer(self.control_loop_rate, loop) self.spin() @@ -56,7 +56,7 @@ def main(args=None): Main function starts the node :param args: """ - roscomp.init('velocity_publisher_dummy', args=args) + roscomp.init("velocity_publisher_dummy", args=args) try: node = MockIntersectionClearPublisher() @@ -67,5 +67,5 @@ def main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/mock/src/mock_stop_sign.py b/code/mock/src/mock_stop_sign.py index a5ec4691..f85f6344 100755 --- a/code/mock/src/mock_stop_sign.py +++ b/code/mock/src/mock_stop_sign.py @@ -2,6 +2,7 @@ import ros_compatibility as roscomp from ros_compatibility.node import CompatibleNode from rospy import Publisher + # from std_msgs.msg import Float32 from mock.msg import Stop_sign @@ -11,18 +12,17 @@ class MockStopSignPublisher(CompatibleNode): This node publishes stop sign light information. 
It can be used for testing. """ + def __init__(self): - super(MockStopSignPublisher, self).\ - __init__('stopSignMock') + super(MockStopSignPublisher, self).__init__("stopSignMock") - self.control_loop_rate = self.get_param('control_loop_rate', 10) - self.role_name = self.get_param('role_name', 'ego_vehicle') + self.control_loop_rate = self.get_param("control_loop_rate", 10) + self.role_name = self.get_param("role_name", "ego_vehicle") # self.enabled = self.get_param('enabled', False) self.stop_sign_pub: Publisher = self.new_publisher( - Stop_sign, - f"/paf/{self.role_name}/stop_sign", - qos_profile=1) + Stop_sign, f"/paf/{self.role_name}/stop_sign", qos_profile=1 + ) self.delta = 0.2 self.distance = 20.0 self.isStop = False @@ -34,7 +34,7 @@ def run(self): """ # if not self.enabled: # return - self.loginfo('Stopsignmock node running') + self.loginfo("Stopsignmock node running") def loop(timer_event=None): """ @@ -50,6 +50,7 @@ def loop(timer_event=None): self.distance = 20.0 msg.distance = self.distance self.stop_sign_pub.publish(msg) + self.new_timer(self.control_loop_rate, loop) self.spin() @@ -59,7 +60,7 @@ def main(args=None): Main function starts the node :param args: """ - roscomp.init('velocity_publisher_dummy', args=args) + roscomp.init("velocity_publisher_dummy", args=args) try: node = MockStopSignPublisher() @@ -70,5 +71,5 @@ def main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/mock/src/mock_traffic_light.py b/code/mock/src/mock_traffic_light.py index 13852e03..63ff289a 100755 --- a/code/mock/src/mock_traffic_light.py +++ b/code/mock/src/mock_traffic_light.py @@ -2,6 +2,7 @@ import ros_compatibility as roscomp from ros_compatibility.node import CompatibleNode from rospy import Publisher + # from std_msgs.msg import Float32 from mock.msg import Traffic_light @@ -10,18 +11,17 @@ class MockTrafficLightPublisher(CompatibleNode): """ This node publishes traffic light information. 
It can be used for testing. """ + def __init__(self): - super(MockTrafficLightPublisher, self).\ - __init__('trafficLightMock') + super(MockTrafficLightPublisher, self).__init__("trafficLightMock") - self.control_loop_rate = self.get_param('control_loop_rate', 10) - self.role_name = self.get_param('role_name', 'ego_vehicle') + self.control_loop_rate = self.get_param("control_loop_rate", 10) + self.role_name = self.get_param("role_name", "ego_vehicle") # self.enabled = self.get_param('enabled', False) self.traffic_light_pub: Publisher = self.new_publisher( - Traffic_light, - f"/paf/{self.role_name}/traffic_light", - qos_profile=1) + Traffic_light, f"/paf/{self.role_name}/traffic_light", qos_profile=1 + ) self.delta = 0.2 self.distance = 20.0 self.color = "green" @@ -33,7 +33,7 @@ def run(self): """ # if not self.enabled: # return - self.loginfo('TrafficLightmock node running') + self.loginfo("TrafficLightmock node running") def loop(timer_event=None): """ @@ -54,6 +54,7 @@ def loop(timer_event=None): self.distance = 20.0 msg.distance = self.distance self.traffic_light_pub.publish(msg) + self.new_timer(self.control_loop_rate, loop) self.spin() @@ -63,7 +64,7 @@ def main(args=None): Main function starts the node :param args: """ - roscomp.init('traffic_light_publisher_dummy', args=args) + roscomp.init("traffic_light_publisher_dummy", args=args) try: node = MockTrafficLightPublisher() @@ -74,5 +75,5 @@ def main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/perception/setup.py b/code/perception/setup.py index 13c6ef29..91c01e49 100644 --- a/code/perception/setup.py +++ b/code/perception/setup.py @@ -2,6 +2,5 @@ from distutils.core import setup from catkin_pkg.python_setup import generate_distutils_setup -setup_args = generate_distutils_setup(packages=['perception'], - package_dir={'': 'src'}) +setup_args = generate_distutils_setup(packages=["perception"], package_dir={"": "src"}) setup(**setup_args) diff 
--git a/code/perception/src/coordinate_transformation.py b/code/perception/src/coordinate_transformation.py index 4f062770..b6bf33e0 100755 --- a/code/perception/src/coordinate_transformation.py +++ b/code/perception/src/coordinate_transformation.py @@ -7,9 +7,11 @@ A good source to read up on the different reference frames is: http://dirsig.cis.rit.edu/docs/new/coordinates.html """ + import math import numpy as np from scipy.spatial.transform import Rotation + # from tf.transformations import euler_from_quaternion @@ -55,12 +57,14 @@ def geodetic_to_enu(lat, lon, alt): scale = math.cos(CoordinateTransformer.la_ref * math.pi / 180.0) basex = scale * math.pi * a / 180.0 * CoordinateTransformer.ln_ref - basey = scale * a * math.log( - math.tan((90.0 + CoordinateTransformer.la_ref) * math.pi / 360.0)) + basey = ( + scale + * a + * math.log(math.tan((90.0 + CoordinateTransformer.la_ref) * math.pi / 360.0)) + ) x = scale * math.pi * a / 180.0 * lon - basex - y = scale * a * math.log( - math.tan((90.0 + lat) * math.pi / 360.0)) - basey + y = scale * a * math.log(math.tan((90.0 + lat) * math.pi / 360.0)) - basey # Is not necessary in new version # y *= -1 @@ -140,6 +144,7 @@ def quat_to_heading(quaternion): return heading + # old functions # def quat_to_heading(msg): # orientation_q = msg diff --git a/code/perception/src/dataset_converter.py b/code/perception/src/dataset_converter.py index 1122f981..9976878f 100755 --- a/code/perception/src/dataset_converter.py +++ b/code/perception/src/dataset_converter.py @@ -9,25 +9,21 @@ def create_argparse(): - argparser = ArgumentParser( - description='CARLA Dataset Converter') + argparser = ArgumentParser(description="CARLA Dataset Converter") + argparser.add_argument("input_dir", help="Path to the input directory") + argparser.add_argument("output_dir", help="Path to the output directory") argparser.add_argument( - 'input_dir', - help='Path to the input directory') - argparser.add_argument( - 'output_dir', - help='Path to the 
output directory') - argparser.add_argument( - '--force', + "--force", default=False, - action='store_true', - help='Overwrite output if already exists') + action="store_true", + help="Overwrite output if already exists", + ) argparser.add_argument( - '--shuffle', + "--shuffle", default=False, - action='store_true', - help='Shuffle the dataset before splitting it' - ' into train, test and validation sets' + action="store_true", + help="Shuffle the dataset before splitting it" + " into train, test and validation sets", ) return argparser @@ -64,35 +60,35 @@ def main(): output_dir.mkdir() else: raise ValueError( - f"given output_dir ({output_dir.as_posix()}) already exists!") + f"given output_dir ({output_dir.as_posix()}) already exists!" + ) if not input_dir.is_dir(): - raise ValueError( - f"input_dir ({input_dir.as_posix()}) needs to be a directory") + raise ValueError(f"input_dir ({input_dir.as_posix()}) needs to be a directory") # first create the necessary directories - groundtruth = output_dir / 'groundtruth' + groundtruth = output_dir / "groundtruth" groundtruth.mkdir(parents=True) rgb_files = {} instance_files = {} # populate dicts - for file in input_dir.rglob('*.png'): + for file in input_dir.rglob("*.png"): side = file.parts[-2] - if 'rgb' in file.parts: + if "rgb" in file.parts: add_to_side_list(rgb_files, side, file) - if 'instance' in file.parts: + if "instance" in file.parts: add_to_side_list(instance_files, side, file) # sort images according to their sequence number for side in rgb_files: - rgb_files[side] = sorted(rgb_files[side], - key=lambda path: int(path.stem)) - instance_files[side] = sorted(instance_files[side], - key=lambda path: int(path.stem)) + rgb_files[side] = sorted(rgb_files[side], key=lambda path: int(path.stem)) + instance_files[side] = sorted( + instance_files[side], key=lambda path: int(path.stem) + ) - print(f'rgb_files[{side}] length: {len(rgb_files[side])}') - print(f'instance_files[{side}] length: {len(instance_files[side])}') 
+ print(f"rgb_files[{side}] length: {len(rgb_files[side])}") + print(f"instance_files[{side}] length: {len(instance_files[side])}") splits = [train, test, val] split_names = ["train", "test", "val"] @@ -128,9 +124,8 @@ def main(): # convert_image_to_cityscapes_labelids( # instance_image, # groundtruth_target_dir / instance_file_name) zzzyy - copyfile(instance_image, - groundtruth_target_dir / instance_file_name) + copyfile(instance_image, groundtruth_target_dir / instance_file_name) -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/perception/src/dataset_generator.py b/code/perception/src/dataset_generator.py index fd849d11..0a52c637 100755 --- a/code/perception/src/dataset_generator.py +++ b/code/perception/src/dataset_generator.py @@ -9,8 +9,8 @@ from threading import Thread # get carla host and port from environment variables -CARLA_HOST = os.environ.get('CARLA_HOST', 'localhost') -CARLA_PORT = int(os.environ.get('CARLA_PORT', '2000')) +CARLA_HOST = os.environ.get("CARLA_HOST", "localhost") +CARLA_PORT = int(os.environ.get("CARLA_PORT", "2000")) def destroy_actors(actors): @@ -25,13 +25,13 @@ def setup_empty_world(client): world.wait_for_tick() # destroy all actors - destroy_actors(world.get_actors().filter('vehicle.*')) - destroy_actors(world.get_actors().filter('walker.*')) - destroy_actors(world.get_actors().filter('controller.*')) + destroy_actors(world.get_actors().filter("vehicle.*")) + destroy_actors(world.get_actors().filter("walker.*")) + destroy_actors(world.get_actors().filter("controller.*")) # spawn ego vehicle blueprint_library = world.get_blueprint_library() - bp = blueprint_library.filter('vehicle.*')[0] + bp = blueprint_library.filter("vehicle.*")[0] ego_vehicle = world.spawn_actor(bp, world.get_map().get_spawn_points()[0]) ego_vehicle.set_autopilot(True) @@ -39,8 +39,10 @@ def setup_empty_world(client): spectator = world.get_spectator() # set spectator to follow ego vehicle with offset spectator.set_transform( - 
carla.Transform(ego_vehicle.get_location() + carla.Location(z=50), - carla.Rotation(pitch=-90))) + carla.Transform( + ego_vehicle.get_location() + carla.Location(z=50), carla.Rotation(pitch=-90) + ) + ) # create traffic manager traffic_manager = client.get_trafficmanager(8000) @@ -50,32 +52,30 @@ def setup_empty_world(client): blueprint_library = world.get_blueprint_library() count = 0 while count < 14: - bp = choice(blueprint_library.filter('walker.pedestrian.*')) + bp = choice(blueprint_library.filter("walker.pedestrian.*")) spawn_point = carla.Transform() spawn_point.location = world.get_random_location_from_navigation() - traffic_pedestrian = world.try_spawn_actor(bp, - spawn_point) + traffic_pedestrian = world.try_spawn_actor(bp, spawn_point) if traffic_pedestrian is None: continue - controller_bp = blueprint_library.find('controller.ai.walker') - ai_controller = world.try_spawn_actor(controller_bp, carla.Transform(), - traffic_pedestrian) + controller_bp = blueprint_library.find("controller.ai.walker") + ai_controller = world.try_spawn_actor( + controller_bp, carla.Transform(), traffic_pedestrian + ) ai_controller.start() - ai_controller.go_to_location( - world.get_random_location_from_navigation()) + ai_controller.go_to_location(world.get_random_location_from_navigation()) ai_controller.set_max_speed(1.0) count += 1 # spawn traffic vehicles for i in range(18): - bp = choice(blueprint_library.filter('vehicle.*')) - traffic_vehicle = world.spawn_actor(bp, - world.get_map().get_spawn_points()[ - i + 1]) - traffic_manager.vehicle_percentage_speed_difference(traffic_vehicle, - 0.0) + bp = choice(blueprint_library.filter("vehicle.*")) + traffic_vehicle = world.spawn_actor( + bp, world.get_map().get_spawn_points()[i + 1] + ) + traffic_manager.vehicle_percentage_speed_difference(traffic_vehicle, 0.0) traffic_vehicle.set_autopilot(True) return ego_vehicle @@ -94,13 +94,13 @@ def __init__(self, output_dir): "center": Queue(), "right": Queue(), "back": Queue(), - 
"left": Queue() + "left": Queue(), } self.instance_camera_queues = { "center": Queue(), "right": Queue(), "back": Queue(), - "left": Queue() + "left": Queue(), } def save_image(self, image, dir): @@ -112,39 +112,37 @@ def save_segmented_image(self, image, dir): def setup_cameras(self, world, ego_vehicle): # get instance segmentation camera blueprint instance_camera_bp = world.get_blueprint_library().find( - 'sensor.camera.instance_segmentation' + "sensor.camera.instance_segmentation" ) # get camera blueprint - camera_bp = world.get_blueprint_library().find('sensor.camera.rgb') - camera_bp.set_attribute('sensor_tick', '1.0') + camera_bp = world.get_blueprint_library().find("sensor.camera.rgb") + camera_bp.set_attribute("sensor_tick", "1.0") # set leaderboard attributes - camera_bp.set_attribute('lens_circle_multiplier', '0.0') - camera_bp.set_attribute('lens_circle_falloff', '5.0') - camera_bp.set_attribute('chromatic_aberration_intensity', '0.5') - camera_bp.set_attribute('chromatic_aberration_offset', '0.0') + camera_bp.set_attribute("lens_circle_multiplier", "0.0") + camera_bp.set_attribute("lens_circle_falloff", "5.0") + camera_bp.set_attribute("chromatic_aberration_intensity", "0.5") + camera_bp.set_attribute("chromatic_aberration_offset", "0.0") IMAGE_WIDTH = 1280 IMAGE_HEIGHT = 720 # set resolution - camera_bp.set_attribute('image_size_x', str(IMAGE_WIDTH)) - camera_bp.set_attribute('image_size_y', str(IMAGE_HEIGHT)) + camera_bp.set_attribute("image_size_x", str(IMAGE_WIDTH)) + camera_bp.set_attribute("image_size_y", str(IMAGE_HEIGHT)) - instance_camera_bp.set_attribute('sensor_tick', '1.0') + instance_camera_bp.set_attribute("sensor_tick", "1.0") # set resolution - instance_camera_bp.set_attribute('image_size_x', str(IMAGE_WIDTH)) - instance_camera_bp.set_attribute('image_size_y', str(IMAGE_HEIGHT)) + instance_camera_bp.set_attribute("image_size_x", str(IMAGE_WIDTH)) + instance_camera_bp.set_attribute("image_size_y", str(IMAGE_HEIGHT)) - 
camera_init_transform = carla.Transform( - carla.Location(z=1.7) - ) + camera_init_transform = carla.Transform(carla.Location(z=1.7)) for i, direction in enumerate(self.directions): print("Creating camera {}".format(direction)) - camera_bp.set_attribute('role_name', direction) - instance_camera_bp.set_attribute('role_name', direction) + camera_bp.set_attribute("role_name", direction) + instance_camera_bp.set_attribute("role_name", direction) # add rotation to camera transform camera_init_transform.rotation.yaw = i * 90 # create camera @@ -152,20 +150,19 @@ def setup_cameras(self, world, ego_vehicle): camera_bp, camera_init_transform, attach_to=ego_vehicle, - attachment_type=carla.AttachmentType.Rigid + attachment_type=carla.AttachmentType.Rigid, ) # create instance segmentation camera instance_camera = world.spawn_actor( instance_camera_bp, camera_init_transform, attach_to=ego_vehicle, - attachment_type=carla.AttachmentType.Rigid + attachment_type=carla.AttachmentType.Rigid, ) - camera.listen( - lambda image, dir=direction: self.save_image(image, dir)) + camera.listen(lambda image, dir=direction: self.save_image(image, dir)) instance_camera.listen( - lambda image, dir=direction: self.save_segmented_image( - image, dir)) + lambda image, dir=direction: self.save_segmented_image(image, dir) + ) self.cameras.append(camera) self.instance_cameras.append(instance_camera) @@ -179,17 +176,11 @@ def save_images_worker(self, direction, stop_event): image = image_queue.get() if counter < 2500: image.save_to_disk( - '{}/rgb/{}/{}.png'.format( - output_dir, direction, - counter - ) + "{}/rgb/{}/{}.png".format(output_dir, direction, counter) ) instance_image = instance_image_queue.get() instance_image.save_to_disk( - '{}/instance/{}/{}.png'.format( - output_dir, direction, - counter - ) + "{}/instance/{}/{}.png".format(output_dir, direction, counter) ) counter += 1 @@ -199,8 +190,9 @@ def start_saving_images(self): for direction in self.directions: thread_stop_event = 
threading.Event() self.thread_stop_events.append(thread_stop_event) - t = Thread(target=self.save_images_worker, - args=(direction, thread_stop_event)) + t = Thread( + target=self.save_images_worker, args=(direction, thread_stop_event) + ) self.threads.append(t) t.start() @@ -211,53 +203,51 @@ def stop_saving_images(self): t.join() -def find_ego_vehicle(world, role_name='hero'): - if world.get_actors().filter('vehicle.*'): +def find_ego_vehicle(world, role_name="hero"): + if world.get_actors().filter("vehicle.*"): # get ego vehicle with hero role - for actor in world.get_actors().filter('vehicle.*'): - if actor.attributes['role_name'] == role_name: + for actor in world.get_actors().filter("vehicle.*"): + if actor.attributes["role_name"] == role_name: return actor def create_argparse(): - argparser = argparse.ArgumentParser( - description='CARLA Dataset Generator') - argparser.add_argument( - '--output-dir', - metavar='DIR', - default='output', - help='Path to the output directory') + argparser = argparse.ArgumentParser(description="CARLA Dataset Generator") argparser.add_argument( - '--host', - metavar='H', - default='localhost', - help='host of the carla server' + "--output-dir", + metavar="DIR", + default="output", + help="Path to the output directory", ) argparser.add_argument( - '--port', - metavar='P', - default=2000, - type=int, - help='port of the carla server' + "--host", metavar="H", default="localhost", help="host of the carla server" ) argparser.add_argument( - '--use-empty-world', - action='store_true', - help='set up an empty world and spawn ego vehicle', - default=False + "--port", metavar="P", default=2000, type=int, help="port of the carla server" ) argparser.add_argument( - '--town', - metavar='T', - default='Town12', - help='town to load' + "--use-empty-world", + action="store_true", + help="set up an empty world and spawn ego vehicle", + default=False, ) + argparser.add_argument("--town", metavar="T", default="Town12", help="town to load") return 
argparser -if __name__ == '__main__': - towns = {"Town01", "Town02", "Town03", "Town04", "Town05", "Town06", - "Town07", "Town10", "Town11", "Town12"} +if __name__ == "__main__": + towns = { + "Town01", + "Town02", + "Town03", + "Town04", + "Town05", + "Town06", + "Town07", + "Town10", + "Town11", + "Town12", + } argparser = create_argparse() args = argparser.parse_args() town = args.town @@ -280,7 +270,7 @@ def create_argparse(): ego_vehicle = find_ego_vehicle(world) if not ego_vehicle: - raise RuntimeError('No vehicle found in the world') + raise RuntimeError("No vehicle found in the world") dataset_generator = DatasetGenerator(output_dir) dataset_generator.setup_cameras(world, ego_vehicle) diff --git a/code/perception/src/experiments/Position_Heading_Datasets/viz.py b/code/perception/src/experiments/Position_Heading_Datasets/viz.py index bee260b8..decdd2ca 100644 --- a/code/perception/src/experiments/Position_Heading_Datasets/viz.py +++ b/code/perception/src/experiments/Position_Heading_Datasets/viz.py @@ -27,7 +27,8 @@ # region PLOTS -def plot_best_tuned_file_by_type(type='x', error_type='MSE', check_type='IQR'): + +def plot_best_tuned_file_by_type(type="x", error_type="MSE", check_type="IQR"): """ Calculates the best tuned file by type and error type using a specific check type. 
@@ -52,37 +53,34 @@ def plot_best_tuned_file_by_type(type='x', error_type='MSE', check_type='IQR'): else: file_name = "data_" + str(i) + ".csv" - ideal, test_filter, current, unfiltered = ( - get_x_or_y_or_h_from_csv_file(file_name, type)) + ideal, test_filter, current, unfiltered = get_x_or_y_or_h_from_csv_file( + file_name, type + ) # calculate the error for each method by error_type - if error_type == 'MSE': + if error_type == "MSE": # Calculate the MSE for each method val_test_filter, test_filter_list = calculate_mse_x_or_y_or_h( - ideal, - test_filter) - val_current, current_list = calculate_mse_x_or_y_or_h( - ideal, - current) + ideal, test_filter + ) + val_current, current_list = calculate_mse_x_or_y_or_h(ideal, current) val_unfiltered, unfiltered_list = calculate_mse_x_or_y_or_h( - ideal, - unfiltered) - elif error_type == 'MAE': + ideal, unfiltered + ) + elif error_type == "MAE": # Calculate the MAE for each method val_test_filter, test_filter_list = calculate_mae_x_or_y_or_h( - ideal, - test_filter) - val_current, current_list = calculate_mae_x_or_y_or_h( - ideal, - current) + ideal, test_filter + ) + val_current, current_list = calculate_mae_x_or_y_or_h(ideal, current) val_unfiltered, unfiltered_list = calculate_mae_x_or_y_or_h( - ideal, - unfiltered) + ideal, unfiltered + ) vals = [val_test_filter, val_current, val_unfiltered] # evaluate the best tuned file by check_type - if check_type == 'IQR': + if check_type == "IQR": q3, q1 = np.percentile(test_filter_list, [80, 0]) iqr = q3 - q1 if i == FILE_START: @@ -95,7 +93,7 @@ def plot_best_tuned_file_by_type(type='x', error_type='MSE', check_type='IQR'): if abs(iqr) < abs(best_val): best_val = iqr best_file = file_name - elif check_type == 'default': + elif check_type == "default": if i == FILE_START: best_val = vals[0] best_file = file_name @@ -114,7 +112,7 @@ def plot_best_tuned_file_by_type(type='x', error_type='MSE', check_type='IQR'): return -def plot_x_or_y_or_h_notched_box(file_name, type='x', 
error_type='MSE'): +def plot_x_or_y_or_h_notched_box(file_name, type="x", error_type="MSE"): """ Calculates and plots the error of x, y or heading data for any given error type. @@ -127,38 +125,33 @@ def plot_x_or_y_or_h_notched_box(file_name, type='x', error_type='MSE'): Returns: """ - if type == 'x': - ideal, test_filter, current, unfiltered = ( - get_x_or_y_or_h_from_csv_file(file_name, 'x')) - elif type == 'y': - ideal, test_filter, current, unfiltered = ( - get_x_or_y_or_h_from_csv_file(file_name, 'y')) - elif type == 'h': - ideal, test_filter, current, unfiltered = ( - get_x_or_y_or_h_from_csv_file(file_name, 'h')) - - if error_type == 'MSE': + if type == "x": + ideal, test_filter, current, unfiltered = get_x_or_y_or_h_from_csv_file( + file_name, "x" + ) + elif type == "y": + ideal, test_filter, current, unfiltered = get_x_or_y_or_h_from_csv_file( + file_name, "y" + ) + elif type == "h": + ideal, test_filter, current, unfiltered = get_x_or_y_or_h_from_csv_file( + file_name, "h" + ) + + if error_type == "MSE": # Calculate the MSE for each method val_test_filter, test_filter_list = calculate_mse_x_or_y_or_h( - ideal, - test_filter) - val_current, current_list = calculate_mse_x_or_y_or_h( - ideal, - current) - val_unfiltered, unfiltered_list = calculate_mse_x_or_y_or_h( - ideal, - unfiltered) - elif error_type == 'MAE': + ideal, test_filter + ) + val_current, current_list = calculate_mse_x_or_y_or_h(ideal, current) + val_unfiltered, unfiltered_list = calculate_mse_x_or_y_or_h(ideal, unfiltered) + elif error_type == "MAE": # Calculate the MAE for each method val_test_filter, test_filter_list = calculate_mae_x_or_y_or_h( - ideal, - test_filter) - val_current, current_list = calculate_mae_x_or_y_or_h( - ideal, - current) - val_unfiltered, unfiltered_list = calculate_mae_x_or_y_or_h( - ideal, - unfiltered) + ideal, test_filter + ) + val_current, current_list = calculate_mae_x_or_y_or_h(ideal, current) + val_unfiltered, unfiltered_list = 
calculate_mae_x_or_y_or_h(ideal, unfiltered) # Create a new figure fig, ax = plt.subplots() @@ -166,37 +159,53 @@ def plot_x_or_y_or_h_notched_box(file_name, type='x', error_type='MSE'): # Create a list of all errors error_list = [test_filter_list, current_list, unfiltered_list] # Create a box plot with notches - boxplot = ax.boxplot(error_list, notch=True, - labels=['Test Filter', 'Current', 'Unfiltered'], - patch_artist=True) + boxplot = ax.boxplot( + error_list, + notch=True, + labels=["Test Filter", "Current", "Unfiltered"], + patch_artist=True, + ) # fill with colors and put median vals in the boxes - colors = ['pink', 'lightblue', 'lightgreen'] + colors = ["pink", "lightblue", "lightgreen"] - tuple = zip(boxplot['boxes'], colors, boxplot['medians'], - boxplot['whiskers'][::2]) + tuple = zip(boxplot["boxes"], colors, boxplot["medians"], boxplot["whiskers"][::2]) for i, (box, color, median, whiskers) in enumerate(tuple): box.set_facecolor(color) median_val = median.get_ydata()[1] - ax.text(i+1, median_val, f'Median: {median_val:.2f}', va='center', - ha='center', backgroundcolor='white') + ax.text( + i + 1, + median_val, + f"Median: {median_val:.2f}", + va="center", + ha="center", + backgroundcolor="white", + ) # Calculate IQR q3, q1 = np.percentile(error_list[i], [75, 0]) iqr = q3 - q1 # Get the y position for the IQR text - median_y = boxplot['medians'][i].get_ydata()[0] # height of the notch + median_y = boxplot["medians"][i].get_ydata()[0] # height of the notch # Add the IQR text - ax.text(i+0.8, median_y, f'IQR: {iqr:.2f}', va='center', - ha='center', rotation=90, color='red', backgroundcolor='white') + ax.text( + i + 0.8, + median_y, + f"IQR: {iqr:.2f}", + va="center", + ha="center", + rotation=90, + color="red", + backgroundcolor="white", + ) # Set the labels - ax.set_xlabel('Filter') + ax.set_xlabel("Filter") ax.set_ylabel(error_type) - ax.set_title(error_type + ' of ' + type + ' for different methods') + ax.set_title(error_type + " of " + type + " for 
different methods") ax.yaxis.grid(True) # Show the plot @@ -215,15 +224,13 @@ def plot_MSE_notched_box(file_name): float: Mean Squared Error (MSE). """ ideal_pos, test_filter_pos, current_pos, unfiltered_pos = ( - get_positions_from_csv_file(file_name)) + get_positions_from_csv_file(file_name) + ) # Calculate the MSE for each method - val_test_filter, test_filter_list = calculate_mse_pos(ideal_pos, - test_filter_pos) - val_current, current_list = calculate_mse_pos(ideal_pos, - current_pos) - val_unfiltered, unfiltered_list = calculate_mse_pos(ideal_pos, - unfiltered_pos) + val_test_filter, test_filter_list = calculate_mse_pos(ideal_pos, test_filter_pos) + val_current, current_list = calculate_mse_pos(ideal_pos, current_pos) + val_unfiltered, unfiltered_list = calculate_mse_pos(ideal_pos, unfiltered_pos) # Create a new figure fig, ax = plt.subplots() @@ -231,23 +238,33 @@ def plot_MSE_notched_box(file_name): # Create a list of all positions pos_mse_list = [test_filter_list, current_list, unfiltered_list] # Create a box plot with notches - boxplot = ax.boxplot(pos_mse_list, notch=True, - labels=['Test Filter', 'Current', 'Unfiltered'], - patch_artist=True) + boxplot = ax.boxplot( + pos_mse_list, + notch=True, + labels=["Test Filter", "Current", "Unfiltered"], + patch_artist=True, + ) # fill with colors and put median vals in the boxes - colors = ['pink', 'lightblue', 'lightgreen'] - for i, (box, color, median) in enumerate(zip(boxplot['boxes'], colors, - boxplot['medians'])): + colors = ["pink", "lightblue", "lightgreen"] + for i, (box, color, median) in enumerate( + zip(boxplot["boxes"], colors, boxplot["medians"]) + ): box.set_facecolor(color) median_val = median.get_ydata()[1] - ax.text(i+1, median_val, f'Median: {median_val:.2f}', va='center', - ha='center', backgroundcolor='white') + ax.text( + i + 1, + median_val, + f"Median: {median_val:.2f}", + va="center", + ha="center", + backgroundcolor="white", + ) # Set the labels - ax.set_xlabel('Method') - 
ax.set_ylabel('MSE') - ax.set_title('MSE for different methods') + ax.set_xlabel("Method") + ax.set_ylabel("MSE") + ax.set_title("MSE for different methods") ax.yaxis.grid(True) # Show the plot @@ -266,15 +283,13 @@ def plot_MAE_notched_box(file_name): float: Mean Absolute Error (MAE). """ ideal_pos, test_filter_pos, current_pos, unfiltered_pos = ( - get_positions_from_csv_file(file_name)) + get_positions_from_csv_file(file_name) + ) # Calculate the MAE for each method - mae_test_filter, test_filter_list = calculate_mae_pos(ideal_pos, - test_filter_pos) - mae_current, current_list = calculate_mae_pos(ideal_pos, - current_pos) - mae_unfiltered, unfiltered_list = calculate_mae_pos(ideal_pos, - unfiltered_pos) + mae_test_filter, test_filter_list = calculate_mae_pos(ideal_pos, test_filter_pos) + mae_current, current_list = calculate_mae_pos(ideal_pos, current_pos) + mae_unfiltered, unfiltered_list = calculate_mae_pos(ideal_pos, unfiltered_pos) # Create a new figure fig, ax = plt.subplots() @@ -283,23 +298,33 @@ def plot_MAE_notched_box(file_name): pos_mae_list = [test_filter_list, current_list, unfiltered_list] # Create a box plot with notches - boxplot = ax.boxplot(pos_mae_list, notch=True, - labels=['Test Filter', 'Current', 'Unfiltered'], - patch_artist=True) + boxplot = ax.boxplot( + pos_mae_list, + notch=True, + labels=["Test Filter", "Current", "Unfiltered"], + patch_artist=True, + ) # fill with colors and put median vals in the boxes - colors = ['pink', 'lightblue', 'lightgreen'] - for i, (box, color, median) in enumerate(zip(boxplot['boxes'], colors, - boxplot['medians'])): + colors = ["pink", "lightblue", "lightgreen"] + for i, (box, color, median) in enumerate( + zip(boxplot["boxes"], colors, boxplot["medians"]) + ): box.set_facecolor(color) median_val = median.get_ydata()[1] - ax.text(i+1, median_val, f'Median: {median_val:.2f}', va='center', - ha='center', backgroundcolor='white') + ax.text( + i + 1, + median_val, + f"Median: {median_val:.2f}", + 
va="center", + ha="center", + backgroundcolor="white", + ) # Set the labels - ax.set_xlabel('Method') - ax.set_ylabel('MAE') - ax.set_title('MAE for different methods') + ax.set_xlabel("Method") + ax.set_ylabel("MAE") + ax.set_title("MAE for different methods") ax.yaxis.grid(True) # Show the plot @@ -327,31 +352,30 @@ def plot_CEP(file_name): df_y.set_index(df_y.columns[0], inplace=True) # create pos tuples of the x and y data and store them as numpy arrays - ideal_pos = np.array(list(zip(df_x['Ideal (Carla)'], - df_y['Ideal (Carla)']))) - test_filter_pos = np.array(list(zip(df_x['Test Filter'], - df_y['Test Filter']))) - current_pos = np.array(list(zip(df_x['Current'], - df_y['Current']))) - unfiltered_pos = np.array(list(zip(df_x['Unfiltered'], - df_y['Unfiltered']))) + ideal_pos = np.array(list(zip(df_x["Ideal (Carla)"], df_y["Ideal (Carla)"]))) + test_filter_pos = np.array(list(zip(df_x["Test Filter"], df_y["Test Filter"]))) + current_pos = np.array(list(zip(df_x["Current"], df_y["Current"]))) + unfiltered_pos = np.array(list(zip(df_x["Unfiltered"], df_y["Unfiltered"]))) # create CEP for each method cep_test_filter, cep_current, cep_unfiltered = calculate_cep( - ideal_pos, test_filter_pos, current_pos, unfiltered_pos) + ideal_pos, test_filter_pos, current_pos, unfiltered_pos + ) # plot the cep as error circles of different colors in the x-y plane # Create a new figure fig, ax = plt.subplots() # Create circles with the given radii - circle_test_filter = plt.Circle((0, 0), cep_test_filter, fill=False, - label='Test Filter', - color='r') - circle_current = plt.Circle((0, 0), cep_current, fill=False, - label='Current', color='g') - circle_unfiltered = plt.Circle((0, 0), cep_unfiltered, fill=False, - label='Unfiltered', color='b') + circle_test_filter = plt.Circle( + (0, 0), cep_test_filter, fill=False, label="Test Filter", color="r" + ) + circle_current = plt.Circle( + (0, 0), cep_current, fill=False, label="Current", color="g" + ) + circle_unfiltered = plt.Circle( 
+ (0, 0), cep_unfiltered, fill=False, label="Unfiltered", color="b" + ) # Add the circles to the plot ax.add_artist(circle_test_filter) @@ -359,10 +383,14 @@ def plot_CEP(file_name): ax.add_artist(circle_unfiltered) # Set the limits of the plot to show all circles - ax.set_xlim(-max(cep_test_filter, cep_current, cep_unfiltered), - max(cep_test_filter, cep_current, cep_unfiltered)) - ax.set_ylim(-max(cep_test_filter, cep_current, cep_unfiltered), - max(cep_test_filter, cep_current, cep_unfiltered)) + ax.set_xlim( + -max(cep_test_filter, cep_current, cep_unfiltered), + max(cep_test_filter, cep_current, cep_unfiltered), + ) + ax.set_ylim( + -max(cep_test_filter, cep_current, cep_unfiltered), + max(cep_test_filter, cep_current, cep_unfiltered), + ) # Add a legend plt.legend() @@ -371,33 +399,33 @@ def plot_CEP(file_name): plt.grid(True) # Set the y-axis label to 'Distance in Meters' - plt.ylabel('Distance in Meters') + plt.ylabel("Distance in Meters") # Set the x-axis label to 'Distance in Meters' - plt.xlabel('Distance in Meters') + plt.xlabel("Distance in Meters") -def plot_csv_x_or_y(file_name, type='x'): +def plot_csv_x_or_y(file_name, type="x"): """ Plots the x or y data from a CSV file. Parameters: file_name (str): The name of the CSV file. 
""" - if type == 'x': + if type == "x": file_path = folder_path_x + file_name - elif type == 'y': + elif type == "y": file_path = folder_path_y + file_name # Read the CSV file into a DataFrame df = pd.read_csv(file_path) # Plot the 'test_filter' (blue) and 'current' (green) - plt.plot(df['Test Filter'], 'b-', label='Test Filter') - plt.plot(df['Current'], 'g-', label='Current') + plt.plot(df["Test Filter"], "b-", label="Test Filter") + plt.plot(df["Current"], "g-", label="Current") # Plot the 'ideal' column with a red dotted line - plt.plot(df['Ideal (Carla)'], 'r:', label='Ideal') + plt.plot(df["Ideal (Carla)"], "r:", label="Ideal") # Display the legend plt.legend() @@ -409,12 +437,12 @@ def plot_csv_x_or_y(file_name, type='x'): # Set the y- # axis label to 'Distance in Meters' - plt.ylabel('Distance in Meters') + plt.ylabel("Distance in Meters") # Set the x-axis label to 'Time' - plt.xlabel('Time in seconds') + plt.xlabel("Time in seconds") - plt.title(type + ' Positions in Meters') + plt.title(type + " Positions in Meters") def plot_csv_heading(file_name): @@ -432,11 +460,11 @@ def plot_csv_heading(file_name): # Plot the 'test_filter_heading' (blue) and 'current_heading' (green) # line style - plt.plot(df['Test Filter'], 'b-', label='Test Filter Heading') - plt.plot(df['Current'], 'g-', label='Current Heading') + plt.plot(df["Test Filter"], "b-", label="Test Filter Heading") + plt.plot(df["Current"], "g-", label="Current Heading") # Plot the 'ideal_heading' column with a blue dotted line - plt.plot(df['Ideal (Carla)'], 'r:', label='Ideal Heading') + plt.plot(df["Ideal (Carla)"], "r:", label="Ideal Heading") # Display the legend plt.legend() # Plot the DataFrame @@ -446,10 +474,10 @@ def plot_csv_heading(file_name): plt.grid(True) # Set the y-axis label to 'Radians' - plt.ylabel('Heading in Radians') + plt.ylabel("Heading in Radians") # Set the x-axis label to 'Time' - plt.xlabel('Time in seconds') + plt.xlabel("Time in seconds") def 
plot_csv_positions(file_name): @@ -462,22 +490,19 @@ def plot_csv_positions(file_name): Returns: """ # Read the CSV file into a DataFrame - ideal, test_filter, current, unfiltered = ( - get_positions_from_csv_file(file_name)) + ideal, test_filter, current, unfiltered = get_positions_from_csv_file(file_name) ideal_x, ideal_y = zip(*ideal) test_filter_x, test_filter_y = zip(*test_filter) current_x, current_y = zip(*current) unfiltered_x, unfiltered_y = zip(*unfiltered) - plt.plot(ideal_x, ideal_y, marker=',', - color='red', label='Ideal') - plt.plot(test_filter_x, test_filter_y, marker='.', - color='blue', label='Test Filter') - plt.plot(current_x, current_y, marker='.', - color='green', label='Current') - plt.plot(unfiltered_x, unfiltered_y, marker='.', - color='purple', label='Unfiltered') + plt.plot(ideal_x, ideal_y, marker=",", color="red", label="Ideal") + plt.plot( + test_filter_x, test_filter_y, marker=".", color="blue", label="Test Filter" + ) + plt.plot(current_x, current_y, marker=".", color="green", label="Current") + plt.plot(unfiltered_x, unfiltered_y, marker=".", color="purple", label="Unfiltered") # Display the legend plt.legend() @@ -486,12 +511,14 @@ def plot_csv_positions(file_name): plt.grid(True) # Set the y-axis label to 'Y Position in Meters' - plt.ylabel('Y Position in Meters') + plt.ylabel("Y Position in Meters") # Set the x-axis label to 'X Position in Meters' - plt.xlabel('X Position in Meters') + plt.xlabel("X Position in Meters") + + plt.title("X and Y Positions in Meters") + - plt.title('X and Y Positions in Meters') # endregion PLOTS @@ -528,7 +555,7 @@ def calculate_mse_pos(ideal, estimated): Tuple: A tuple containing the MSE and the error for each position. """ # Calculate the errors - error = np.linalg.norm(ideal - estimated, axis=1)**2 + error = np.linalg.norm(ideal - estimated, axis=1) ** 2 # Calculate the MSE mse = np.mean(error) @@ -570,7 +597,7 @@ def calculate_mse_x_or_y_or_h(ideal, estimated): or heading. 
""" # Calculate the errors - error = (ideal - estimated)**2 + error = (ideal - estimated) ** 2 # Calculate the MSE mse = np.mean(error) @@ -597,9 +624,9 @@ def calculate_cep(ideal, test_filter, current, unfiltered, percentile=90): tuple: A tuple containing the CEP for each method. """ # Calculate the errors - error_test_filter = np.sqrt(np.sum((test_filter - ideal)**2, axis=1)) - error_current = np.sqrt(np.sum((current - ideal)**2, axis=1)) - error_unfiltered = np.sqrt(np.sum((unfiltered - ideal)**2, axis=1)) + error_test_filter = np.sqrt(np.sum((test_filter - ideal) ** 2, axis=1)) + error_current = np.sqrt(np.sum((current - ideal) ** 2, axis=1)) + error_unfiltered = np.sqrt(np.sum((unfiltered - ideal) ** 2, axis=1)) # Calculate the CEP for each method cep_test_filter = np.percentile(error_test_filter, percentile) @@ -608,6 +635,7 @@ def calculate_cep(ideal, test_filter, current, unfiltered, percentile=90): return cep_test_filter, cep_current, cep_unfiltered + # endregion CALCUATIONS @@ -642,19 +670,15 @@ def get_positions_from_csv_file(file_name, file_name_y=file_name): df_y.set_index(df_y.columns[0], inplace=True) # create pos tuples of the x and y data and store them as numpy arrays - ideal_pos = np.array(list(zip(df_x['Ideal (Carla)'], - df_y['Ideal (Carla)']))) - test_filter_pos = np.array(list(zip(df_x['Test Filter'], - df_y['Test Filter']))) - current_pos = np.array(list(zip(df_x['Current'], - df_y['Current']))) - unfiltered_pos = np.array(list(zip(df_x['Unfiltered'], - df_y['Unfiltered']))) + ideal_pos = np.array(list(zip(df_x["Ideal (Carla)"], df_y["Ideal (Carla)"]))) + test_filter_pos = np.array(list(zip(df_x["Test Filter"], df_y["Test Filter"]))) + current_pos = np.array(list(zip(df_x["Current"], df_y["Current"]))) + unfiltered_pos = np.array(list(zip(df_x["Unfiltered"], df_y["Unfiltered"]))) return ideal_pos, test_filter_pos, current_pos, unfiltered_pos -def get_x_or_y_or_h_from_csv_file(file_name, type='x'): +def 
get_x_or_y_or_h_from_csv_file(file_name, type="x"): """ Reads x,y or heading data from CSV files and returns them as numpy arrays. @@ -669,13 +693,13 @@ def get_x_or_y_or_h_from_csv_file(file_name, type='x'): - test_filter: The data estimated using test_filter filtering. - current: The data calculated using a running average. - - unfiltered: The unfiltered data. """ + - unfiltered: The unfiltered data.""" - if type == 'x': + if type == "x": file_path = folder_path_x + file_name - elif type == 'y': + elif type == "y": file_path = folder_path_y + file_name - elif type == 'h': + elif type == "h": file_path = folder_path_heading + file_name # Read the CSV file into a DataFrame @@ -685,26 +709,28 @@ def get_x_or_y_or_h_from_csv_file(file_name, type='x'): # Set the first column (time) as the index of the DataFrames df.set_index(df.columns[0], inplace=True) - if type == 'x': + if type == "x": # store x as numpy arrays - ideal = np.array(df['Ideal (Carla)']) - test_filter = np.array(df['Test Filter']) - current = np.array(df['Current']) - unfiltered = np.array(df['Unfiltered']) - elif type == 'y': + ideal = np.array(df["Ideal (Carla)"]) + test_filter = np.array(df["Test Filter"]) + current = np.array(df["Current"]) + unfiltered = np.array(df["Unfiltered"]) + elif type == "y": # store y as numpy arrays - ideal = np.array(df['Ideal (Carla)']) - test_filter = np.array(df['Test Filter']) - current = np.array(df['Current']) - unfiltered = np.array(df['Unfiltered']) - elif type == 'h': + ideal = np.array(df["Ideal (Carla)"]) + test_filter = np.array(df["Test Filter"]) + current = np.array(df["Current"]) + unfiltered = np.array(df["Unfiltered"]) + elif type == "h": # store heading as numpy arrays - ideal = np.array(df['Ideal (Carla)']) - test_filter = np.array(df['Test Filter']) - current = np.array(df['Current']) - unfiltered = np.array(df['Unfiltered']) + ideal = np.array(df["Ideal (Carla)"]) + test_filter = np.array(df["Test Filter"]) + current = np.array(df["Current"]) + 
unfiltered = np.array(df["Unfiltered"]) return ideal, test_filter, current, unfiltered + + # endregion helper methods @@ -714,11 +740,11 @@ def get_x_or_y_or_h_from_csv_file(file_name, type='x'): data = file_name plot_CEP(data) - plot_x_or_y_or_h_notched_box(data, type='x', error_type='MSE') - plot_x_or_y_or_h_notched_box(data, type='x', error_type='MAE') + plot_x_or_y_or_h_notched_box(data, type="x", error_type="MSE") + plot_x_or_y_or_h_notched_box(data, type="x", error_type="MAE") - plot_x_or_y_or_h_notched_box(data, type='y', error_type='MSE') - plot_x_or_y_or_h_notched_box(data, type='y', error_type='MAE') + plot_x_or_y_or_h_notched_box(data, type="y", error_type="MSE") + plot_x_or_y_or_h_notched_box(data, type="y", error_type="MAE") # plot_x_or_y_or_h_notched_box(data, type='h', error_type='MSE') # plot_x_or_y_or_h_notched_box(data, type='h', error_type='MAE') diff --git a/code/perception/src/global_plan_distance_publisher.py b/code/perception/src/global_plan_distance_publisher.py index 4c53ca71..6601c514 100755 --- a/code/perception/src/global_plan_distance_publisher.py +++ b/code/perception/src/global_plan_distance_publisher.py @@ -8,6 +8,7 @@ from perception.msg import Waypoint, LaneChange import math + # import rospy @@ -24,8 +25,7 @@ def __init__(self): :return: """ - super(GlobalPlanDistance, self).__init__('global_plan_distance' - '_publisher') + super(GlobalPlanDistance, self).__init__("global_plan_distance" "_publisher") self.loginfo("GlobalPlanDistance node started") # basic info @@ -42,23 +42,25 @@ def __init__(self): PoseStamped, "/paf/" + self.role_name + "/current_pos", self.update_position, - qos_profile=1) + qos_profile=1, + ) self.global_plan_subscriber = self.new_subscription( CarlaRoute, "/carla/" + self.role_name + "/global_plan", self.update_global_route, - qos_profile=1) + qos_profile=1, + ) self.waypoint_publisher = self.new_publisher( - Waypoint, - "/paf/" + self.role_name + "/waypoint_distance", - qos_profile=1) + Waypoint, "/paf/" + 
self.role_name + "/waypoint_distance", qos_profile=1 + ) self.lane_change_publisher = self.new_publisher( LaneChange, "/paf/" + self.role_name + "/lane_change_distance", - qos_profile=1) + qos_profile=1, + ) def update_position(self, pos): """ @@ -78,35 +80,38 @@ def distance(a, b): # points to navigate to if self.global_route is not None and self.global_route: - current_distance = distance(self.global_route[0].position, - self.current_pos.position) - next_distance = distance(self.global_route[1].position, - self.current_pos.position) + current_distance = distance( + self.global_route[0].position, self.current_pos.position + ) + next_distance = distance( + self.global_route[1].position, self.current_pos.position + ) # if the road option indicates an intersection, the distance to the # next waypoint is also the distance to the stop line if self.road_options[0] < 4: # print("publish waypoint") - self.waypoint_publisher.publish( - Waypoint(current_distance, True)) + self.waypoint_publisher.publish(Waypoint(current_distance, True)) self.lane_change_publisher.publish( - LaneChange(current_distance, False, self.road_options[0])) + LaneChange(current_distance, False, self.road_options[0]) + ) else: - self.waypoint_publisher.publish( - Waypoint(current_distance, False)) + self.waypoint_publisher.publish(Waypoint(current_distance, False)) if self.road_options[0] == 5 or self.road_options[0] == 6: self.lane_change_publisher.publish( - LaneChange(current_distance, True, - self.road_options[0])) + LaneChange(current_distance, True, self.road_options[0]) + ) # if we reached the next waypoint, pop it and the next point will # be published if current_distance < 2.5 or next_distance < current_distance: self.road_options.pop(0) self.global_route.pop(0) - if self.road_options[0] in {5, 6} and \ - self.road_options[0] == self.road_options[1]: + if ( + self.road_options[0] in {5, 6} + and self.road_options[0] == self.road_options[1] + ): self.road_options[1] = 4 print(f"next road 
option = {self.road_options[0]}") diff --git a/code/perception/src/kalman_filter.py b/code/perception/src/kalman_filter.py index 6a44bc22..c5370750 100755 --- a/code/perception/src/kalman_filter.py +++ b/code/perception/src/kalman_filter.py @@ -16,7 +16,7 @@ GPS_RUNNING_AVG_ARGS = 10 -''' +""" For more information see the documentation in: ../../doc/perception/kalman_filter.md @@ -67,7 +67,7 @@ self.Q = np.diag([0.0001, 0.0001, 0.00001, 0.00001, 0.000001, 0.00001]) The measurement covariance matrix R is defined as: self.R = np.diag([0.0007, 0.0007, 0, 0, 0, 0]) -''' +""" class KalmanFilter(CompatibleNode): @@ -77,14 +77,15 @@ class KalmanFilter(CompatibleNode): For more information see the documentation in: ../../doc/perception/kalman_filter.md """ + def __init__(self): """ Constructor / Setup :return: """ - super(KalmanFilter, self).__init__('kalman_filter_node') + super(KalmanFilter, self).__init__("kalman_filter_node") - self.loginfo('KalmanFilter node started') + self.loginfo("KalmanFilter node started") # basic info self.transformer = None # for coordinate transformation self.role_name = self.get_param("role_name", "hero") @@ -97,7 +98,7 @@ def __init__(self): self.initialized = False # state vector X - ''' + """ [ [initial_x], [initial_y], @@ -106,7 +107,7 @@ def __init__(self): [yaw], [omega_z], ] - ''' + """ self.x_est = np.zeros((6, 1)) # estimated state vector self.P_est = np.zeros((6, 6)) # estiamted state covariance matrix @@ -115,7 +116,7 @@ def __init__(self): self.P_pred = np.zeros((6, 6)) # Predicted state covariance matrix # Define state transition matrix - ''' + """ # [x ... ] # [y ... ] # [v_x ... 
] @@ -128,27 +129,35 @@ def __init__(self): v_y = v_y yaw = yaw + omega_z * dt omega_z = omega_z - ''' - self.A = np.array([[1, 0, self.dt, 0, 0, 0], - [0, 1, 0, self.dt, 0, 0], - [0, 0, 1, 0, 0, 0], - [0, 0, 0, 1, 0, 0], - [0, 0, 0, 0, 1, self.dt], - [0, 0, 0, 0, 0, 1]]) + """ + self.A = np.array( + [ + [1, 0, self.dt, 0, 0, 0], + [0, 1, 0, self.dt, 0, 0], + [0, 0, 1, 0, 0, 0], + [0, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 1, self.dt], + [0, 0, 0, 0, 0, 1], + ] + ) # Define measurement matrix - ''' + """ 1. GPS: x, y 2. Velocity: v_x, v_y 3. IMU: yaw, omega_z -> 6 measurements for a state vector of 6 - ''' - self.H = np.array([[1, 0, 0, 0, 0, 0], # x - [0, 1, 0, 0, 0, 0], # y - [0, 0, 1, 0, 0, 0], # v_x - [0, 0, 0, 1, 0, 0], # v_y - [0, 0, 0, 0, 1, 0], # yaw - [0, 0, 0, 0, 0, 1]]) # omega_z + """ + self.H = np.array( + [ + [1, 0, 0, 0, 0, 0], # x + [0, 1, 0, 0, 0, 0], # y + [0, 0, 1, 0, 0, 0], # v_x + [0, 0, 0, 1, 0, 0], # v_y + [0, 0, 0, 0, 1, 0], # yaw + [0, 0, 0, 0, 0, 1], + ] + ) # omega_z # Define Measurement Variables self.z_gps = np.zeros((2, 1)) # GPS measurements (x, y) @@ -167,25 +176,28 @@ def __init__(self): self.latitude = 0 # latitude of the current position - # Subscriber + # Subscriber # Initialize the subscriber for the OpenDrive Map self.map_sub = self.new_subscription( String, "/carla/" + self.role_name + "/OpenDRIVE", self.get_geoRef, - qos_profile=1) + qos_profile=1, + ) # Initialize the subscriber for the IMU Data self.imu_subscriber = self.new_subscription( Imu, "/carla/" + self.role_name + "/IMU", self.update_imu_data, - qos_profile=1) + qos_profile=1, + ) # Initialize the subscriber for the GPS Data self.gps_subscriber = self.new_subscription( NavSatFix, "/carla/" + self.role_name + "/GPS", self.update_gps_data, - qos_profile=1) + qos_profile=1, + ) # Initialize the subscriber for the unfiltered_pos in XYZ self.avg_z = np.zeros((GPS_RUNNING_AVG_ARGS, 1)) self.avg_gps_counter: int = 0 @@ -193,25 +205,25 @@ def __init__(self): PoseStamped, "/paf/" + 
self.role_name + "/unfiltered_pos", self.update_unfiltered_pos, - qos_profile=1) + qos_profile=1, + ) # Initialize the subscriber for the velocity self.velocity_subscriber = self.new_subscription( CarlaSpeedometer, "/carla/" + self.role_name + "/Speed", self.update_velocity, - qos_profile=1) + qos_profile=1, + ) - # Publisher + # Publisher # Initialize the publisher for the kalman-position self.kalman_position_publisher = self.new_publisher( - PoseStamped, - "/paf/" + self.role_name + "/kalman_pos", - qos_profile=1) + PoseStamped, "/paf/" + self.role_name + "/kalman_pos", qos_profile=1 + ) # Initialize the publisher for the kalman-heading self.kalman_heading_publisher = self.new_publisher( - Float32, - "/paf/" + self.role_name + "/kalman_heading", - qos_profile=1) + Float32, "/paf/" + self.role_name + "/kalman_heading", qos_profile=1 + ) def run(self): """ @@ -223,16 +235,20 @@ def run(self): rospy.sleep(1) rospy.sleep(1) - self.loginfo('KalmanFilter started its loop!') + self.loginfo("KalmanFilter started its loop!") # initialize the state vector x_est and the covariance matrix P_est # initial state vector x_0 - self.x_0 = np.array([[self.z_gps[0, 0]], - [self.z_gps[1, 0]], - [self.z_v[0, 0]], - [self.z_v[1, 0]], - [self.z_imu[0, 0]], - [self.z_imu[1, 0]]]) + self.x_0 = np.array( + [ + [self.z_gps[0, 0]], + [self.z_gps[1, 0]], + [self.z_v[0, 0]], + [self.z_v[1, 0]], + [self.z_imu[0, 0]], + [self.z_imu[1, 0]], + ] + ) self.x_est = np.copy(self.x_0) # estimated initial state vector self.P_est = np.eye(6) * 1 # estiamted initialstatecovariancematrix @@ -331,10 +347,12 @@ def update_imu_data(self, imu_data): orientation_w = imu_data.orientation.w # Calculate the heading based on the orientation given by the IMU - data_orientation_q = [orientation_x, - orientation_y, - orientation_z, - orientation_w] + data_orientation_q = [ + orientation_x, + orientation_y, + orientation_z, + orientation_w, + ] heading = quat_to_heading(data_orientation_q) @@ -417,8 +435,8 @@ def 
get_geoRef(self, opendrive: String): indexLatEnd = geoRefText.find(" ", indexLat) indexLonEnd = geoRefText.find(" ", indexLon) - latValue = float(geoRefText[indexLat + len(latString):indexLatEnd]) - lonValue = float(geoRefText[indexLon + len(lonString):indexLonEnd]) + latValue = float(geoRefText[indexLat + len(latString) : indexLatEnd]) + lonValue = float(geoRefText[indexLon + len(lonString) : indexLonEnd]) CoordinateTransformer.la_ref = latValue CoordinateTransformer.ln_ref = lonValue @@ -431,7 +449,7 @@ def main(args=None): Main function starts the node :param args: """ - roscomp.init('kalman_filter_node', args=args) + roscomp.init("kalman_filter_node", args=args) try: node = KalmanFilter() @@ -442,5 +460,5 @@ def main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/code/perception/src/lidar_distance.py b/code/perception/src/lidar_distance.py index f5f7c964..04394ee2 100755 --- a/code/perception/src/lidar_distance.py +++ b/code/perception/src/lidar_distance.py @@ -4,20 +4,22 @@ import numpy as np import lidar_filter_utility from sensor_msgs.msg import PointCloud2 + # from mpl_toolkits.mplot3d import Axes3D # from itertools import combinations from sensor_msgs.msg import Image as ImageMsg from cv_bridge import CvBridge + # from matplotlib.colors import LinearSegmentedColormap -class LidarDistance(): - """ See doc/perception/lidar_distance_utility.md on - how to configute this node +class LidarDistance: + """See doc/perception/lidar_distance_utility.md on + how to configure this node """ def callback(self, data): - """ Callback function, filters a PontCloud2 message + """Callback function, filters a PointCloud2 message by restrictions defined in the launchfile. Publishes a Depth image for the specified camera angle. 
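The `callback` reformatted in the hunks below masks the point cloud with `lidar_filter_utility.bounding_box` before reconstructing per-direction depth images. A minimal, self-contained sketch of that masking pattern on a structured NumPy array (field names `x`/`y`/`z` match the utility reformatted at the end of this patch; the sample points and thresholds are purely illustrative):

```python
import numpy as np


def bounding_box(points, min_x=-np.inf, max_x=np.inf,
                 min_y=-np.inf, max_y=np.inf,
                 min_z=-np.inf, max_z=np.inf):
    """Boolean mask selecting points strictly inside the box on every axis."""
    bound_x = np.logical_and(points["x"] > min_x, points["x"] < max_x)
    bound_y = np.logical_and(points["y"] > min_y, points["y"] < max_y)
    bound_z = np.logical_and(points["z"] > min_z, points["z"] < max_z)
    return bound_x & bound_y & bound_z


# Example: keep only points in front of the sensor (x > 0) and above z = -1.6,
# analogous to the "Center" filter in the callback.
pts = np.array(
    [(2.0, 0.5, 0.0), (-1.0, 0.0, 0.0), (3.0, -0.2, -2.0)],
    dtype=[("x", "f4"), ("y", "f4"), ("z", "f4")],
)
mask = bounding_box(pts, min_x=0.0, min_z=-1.6)
front = pts[mask]  # only the first sample point satisfies both bounds
```

Because the mask is a plain boolean array, the same pattern indexes the full structured cloud (intensity included) before the `remove_field_name` step shown below.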
@@ -32,22 +34,21 @@ def callback(self, data): reconstruct_bit_mask_center = lidar_filter_utility.bounding_box( coordinates, max_x=np.inf, - min_x=0., + min_x=0.0, min_z=-1.6, ) - reconstruct_coordinates_center = \ - coordinates[reconstruct_bit_mask_center] + reconstruct_coordinates_center = coordinates[reconstruct_bit_mask_center] reconstruct_coordinates_xyz_center = np.array( lidar_filter_utility.remove_field_name( - reconstruct_coordinates_center, - 'intensity') - .tolist() + reconstruct_coordinates_center, "intensity" + ).tolist() ) dist_array_center = self.reconstruct_img_from_lidar( - reconstruct_coordinates_xyz_center, focus="Center") - dist_array_center_msg = \ - self.bridge.cv2_to_imgmsg(dist_array_center, - encoding="passthrough") + reconstruct_coordinates_xyz_center, focus="Center" + ) + dist_array_center_msg = self.bridge.cv2_to_imgmsg( + dist_array_center, encoding="passthrough" + ) dist_array_center_msg.header = data.header self.dist_array_center_publisher.publish(dist_array_center_msg) @@ -61,15 +62,15 @@ def callback(self, data): reconstruct_coordinates_back = coordinates[reconstruct_bit_mask_back] reconstruct_coordinates_xyz_back = np.array( lidar_filter_utility.remove_field_name( - reconstruct_coordinates_back, - 'intensity') - .tolist() + reconstruct_coordinates_back, "intensity" + ).tolist() ) dist_array_back = self.reconstruct_img_from_lidar( - reconstruct_coordinates_xyz_back, focus="Back") - dist_array_back_msg = \ - self.bridge.cv2_to_imgmsg(dist_array_back, - encoding="passthrough") + reconstruct_coordinates_xyz_back, focus="Back" + ) + dist_array_back_msg = self.bridge.cv2_to_imgmsg( + dist_array_back, encoding="passthrough" + ) dist_array_back_msg.header = data.header self.dist_array_back_publisher.publish(dist_array_back_msg) @@ -83,37 +84,34 @@ def callback(self, data): reconstruct_coordinates_left = coordinates[reconstruct_bit_mask_left] reconstruct_coordinates_xyz_left = np.array( lidar_filter_utility.remove_field_name( - 
reconstruct_coordinates_left, - 'intensity') - .tolist() + reconstruct_coordinates_left, "intensity" + ).tolist() ) dist_array_left = self.reconstruct_img_from_lidar( - reconstruct_coordinates_xyz_left, focus="Left") - dist_array_left_msg = \ - self.bridge.cv2_to_imgmsg(dist_array_left, - encoding="passthrough") + reconstruct_coordinates_xyz_left, focus="Left" + ) + dist_array_left_msg = self.bridge.cv2_to_imgmsg( + dist_array_left, encoding="passthrough" + ) dist_array_left_msg.header = data.header self.dist_array_left_publisher.publish(dist_array_left_msg) # Right reconstruct_bit_mask_right = lidar_filter_utility.bounding_box( - coordinates, - max_y=-0.0, - min_y=-np.inf, - min_z=-1.6 + coordinates, max_y=-0.0, min_y=-np.inf, min_z=-1.6 ) reconstruct_coordinates_right = coordinates[reconstruct_bit_mask_right] reconstruct_coordinates_xyz_right = np.array( lidar_filter_utility.remove_field_name( - reconstruct_coordinates_right, - 'intensity') - .tolist() + reconstruct_coordinates_right, "intensity" + ).tolist() ) dist_array_right = self.reconstruct_img_from_lidar( - reconstruct_coordinates_xyz_right, focus="Right") - dist_array_right_msg = \ - self.bridge.cv2_to_imgmsg(dist_array_right, - encoding="passthrough") + reconstruct_coordinates_xyz_right, focus="Right" + ) + dist_array_right_msg = self.bridge.cv2_to_imgmsg( + dist_array_right, encoding="passthrough" + ) dist_array_right_msg.header = data.header self.dist_array_right_publisher.publish(dist_array_right_msg) @@ -122,60 +120,51 @@ def listener(self): Initializes the node and it's publishers """ # run simultaneously. 
- rospy.init_node('lidar_distance') + rospy.init_node("lidar_distance") self.bridge = CvBridge() self.pub_pointcloud = rospy.Publisher( rospy.get_param( - '~point_cloud_topic', - '/carla/hero/' + rospy.get_namespace() + '_filtered' + "~point_cloud_topic", + "/carla/hero/" + rospy.get_namespace() + "_filtered", ), PointCloud2, - queue_size=10 + queue_size=10, ) # publisher for dist_array self.dist_array_center_publisher = rospy.Publisher( - rospy.get_param( - '~image_distance_topic', - '/paf/hero/Center/dist_array' - ), + rospy.get_param("~image_distance_topic", "/paf/hero/Center/dist_array"), ImageMsg, - queue_size=10 + queue_size=10, ) # publisher for dist_array self.dist_array_back_publisher = rospy.Publisher( - rospy.get_param( - '~image_distance_topic', - '/paf/hero/Back/dist_array' - ), + rospy.get_param("~image_distance_topic", "/paf/hero/Back/dist_array"), ImageMsg, - queue_size=10 + queue_size=10, ) # publisher for dist_array self.dist_array_left_publisher = rospy.Publisher( - rospy.get_param( - '~image_distance_topic', - '/paf/hero/Left/dist_array' - ), + rospy.get_param("~image_distance_topic", "/paf/hero/Left/dist_array"), ImageMsg, - queue_size=10 + queue_size=10, ) # publisher for dist_array self.dist_array_right_publisher = rospy.Publisher( - rospy.get_param( - '~image_distance_topic', - '/paf/hero/Right/dist_array' - ), + rospy.get_param("~image_distance_topic", "/paf/hero/Right/dist_array"), ImageMsg, - queue_size=10 + queue_size=10, ) - rospy.Subscriber(rospy.get_param('~source_topic', "/carla/hero/LIDAR"), - PointCloud2, self.callback) + rospy.Subscriber( + rospy.get_param("~source_topic", "/carla/hero/LIDAR"), + PointCloud2, + self.callback, + ) rospy.spin() @@ -214,45 +203,49 @@ def reconstruct_img_from_lidar(self, coordinates_xyz, focus): if focus == "Center": point = np.array([c[1], c[2], c[0], 1]) pixel = np.matmul(m, point) - x, y = int(pixel[0]/pixel[2]), int(pixel[1]/pixel[2]) + x, y = int(pixel[0] / pixel[2]), int(pixel[1] / pixel[2]) if 
x >= 0 and x <= 1280 and y >= 0 and y <= 720: - img[719-y][1279-x] = c[0] - dist_array[719-y][1279-x] = \ - np.array([c[0], c[1], c[2]], dtype=np.float32) + img[719 - y][1279 - x] = c[0] + dist_array[719 - y][1279 - x] = np.array( + [c[0], c[1], c[2]], dtype=np.float32 + ) # back depth image if focus == "Back": point = np.array([c[1], c[2], c[0], 1]) pixel = np.matmul(m, point) - x, y = int(pixel[0]/pixel[2]), int(pixel[1]/pixel[2]) + x, y = int(pixel[0] / pixel[2]), int(pixel[1] / pixel[2]) if x >= 0 and x <= 1280 and y >= 0 and y < 720: - img[y][1279-x] = -c[0] - dist_array[y][1279-x] = \ - np.array([-c[0], c[1], c[2]], dtype=np.float32) + img[y][1279 - x] = -c[0] + dist_array[y][1279 - x] = np.array( + [-c[0], c[1], c[2]], dtype=np.float32 + ) # left depth image if focus == "Left": point = np.array([c[0], c[2], c[1], 1]) pixel = np.matmul(m, point) - x, y = int(pixel[0]/pixel[2]), int(pixel[1]/pixel[2]) + x, y = int(pixel[0] / pixel[2]), int(pixel[1] / pixel[2]) if x >= 0 and x <= 1280 and y >= 0 and y <= 720: - img[719-y][1279-x] = c[1] - dist_array[y][1279-x] = \ - np.array([c[0], c[1], c[2]], dtype=np.float32) + img[719 - y][1279 - x] = c[1] + dist_array[y][1279 - x] = np.array( + [c[0], c[1], c[2]], dtype=np.float32 + ) # right depth image if focus == "Right": point = np.array([c[0], c[2], c[1], 1]) pixel = np.matmul(m, point) - x, y = int(pixel[0]/pixel[2]), int(pixel[1]/pixel[2]) + x, y = int(pixel[0] / pixel[2]), int(pixel[1] / pixel[2]) if x >= 0 and x < 1280 and y >= 0 and y < 720: - img[y][1279-x] = -c[1] - dist_array[y][1279-x] = \ - np.array([c[0], c[1], c[2]], dtype=np.float32) + img[y][1279 - x] = -c[1] + dist_array[y][1279 - x] = np.array( + [c[0], c[1], c[2]], dtype=np.float32 + ) return dist_array -if __name__ == '__main__': +if __name__ == "__main__": lidar_distance = LidarDistance() lidar_distance.listener() diff --git a/code/perception/src/lidar_filter_utility.py b/code/perception/src/lidar_filter_utility.py index 4cd50260..b5904e4b 100755 
--- a/code/perception/src/lidar_filter_utility.py +++ b/code/perception/src/lidar_filter_utility.py @@ -3,9 +3,16 @@ # https://gist.github.com/bigsnarfdude/bbfdf343cc2fc818dc08b58c0e1374ae -def bounding_box(points, min_x=-np.inf, max_x=np.inf, min_y=-np.inf, - max_y=np.inf, min_z=-np.inf, max_z=np.inf): - """ Compute a bounding_box filter on the given points +def bounding_box( + points, + min_x=-np.inf, + max_x=np.inf, + min_y=-np.inf, + max_y=np.inf, + min_z=-np.inf, + max_z=np.inf, +): + """Compute a bounding_box filter on the given points Parameters ---------- @@ -31,9 +38,9 @@ def bounding_box(points, min_x=-np.inf, max_x=np.inf, min_y=-np.inf, """ - bound_x = np.logical_and(points['x'] > min_x, points['x'] < max_x) - bound_y = np.logical_and(points['y'] > min_y, points['y'] < max_y) - bound_z = np.logical_and(points['z'] > min_z, points['z'] < max_z) + bound_x = np.logical_and(points["x"] > min_x, points["x"] < max_x) + bound_y = np.logical_and(points["y"] > min_y, points["y"] < max_y) + bound_z = np.logical_and(points["z"] > min_z, points["z"] < max_z) bb_filter = bound_x & bound_y & bound_z @@ -42,7 +49,7 @@ def bounding_box(points, min_x=-np.inf, max_x=np.inf, min_y=-np.inf, # https://stackoverflow.com/questions/15575878/how-do-you-remove-a-column-from-a-structured-numpy-array def remove_field_name(a, name): - """ Removes a column from a structured numpy array + """Removes a column from a structured numpy array :param a: structured numoy array :param name: name of the column to remove diff --git a/code/perception/src/position_heading_filter_debug_node.py b/code/perception/src/position_heading_filter_debug_node.py index 25d4c901..0c1428f2 100755 --- a/code/perception/src/position_heading_filter_debug_node.py +++ b/code/perception/src/position_heading_filter_debug_node.py @@ -10,6 +10,7 @@ from ros_compatibility.node import CompatibleNode from geometry_msgs.msg import PoseStamped from std_msgs.msg import Float32, Header + # from tf.transformations import 
euler_from_quaternion from std_msgs.msg import Float32MultiArray import rospy @@ -26,6 +27,7 @@ class position_heading_filter_debug_node(CompatibleNode): Node publishes a filtered gps signal. This is achieved using a rolling average. """ + def __init__(self): """ Constructor / Setup @@ -33,15 +35,16 @@ def __init__(self): """ super(position_heading_filter_debug_node, self).__init__( - 'position_heading_filter_debug_node') + "position_heading_filter_debug_node" + ) # basic info self.role_name = self.get_param("role_name", "hero") self.control_loop_rate = self.get_param("control_loop_rate", "0.05") # carla attributes - CARLA_HOST = os.environ.get('CARLA_HOST', 'paf-carla-simulator-1') - CARLA_PORT = int(os.environ.get('CARLA_PORT', '2000')) + CARLA_HOST = os.environ.get("CARLA_HOST", "paf-carla-simulator-1") + CARLA_PORT = int(os.environ.get("CARLA_PORT", "2000")) self.client = carla.Client(CARLA_HOST, CARLA_PORT) self.world = None self.carla_car = None @@ -66,11 +69,11 @@ def __init__(self): # csv file attributes/ flags for plots self.csv_x_created = False - self.csv_file_path_x = '' + self.csv_file_path_x = "" self.csv_y_created = False - self.csv_file_path_y = '' + self.csv_file_path_y = "" self.csv_heading_created = False - self.csv_file_path_heading = '' + self.csv_file_path_heading = "" self.loginfo("Position Heading Filter Debug node started") @@ -81,40 +84,46 @@ def __init__(self): PoseStamped, f"/paf/{self.role_name}/current_pos", self.set_current_pos, - qos_profile=1) + qos_profile=1, + ) # Current_heading subscriber: self.current_heading_subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/current_heading", self.set_current_heading, - qos_profile=1) + qos_profile=1, + ) # test_filter_pos subscriber: self.test_filter_pos_subscriber = self.new_subscription( PoseStamped, f"/paf/{self.role_name}/kalman_pos", self.set_test_filter_pos, - qos_profile=1) + qos_profile=1, + ) # test_filter_heading subscriber: self.test_filter_heading_subscriber = 
self.new_subscription( Float32, f"/paf/{self.role_name}/kalman_heading", self.set_test_filter_heading, - qos_profile=1) + qos_profile=1, + ) # Unfiltered_pos subscriber: self.unfiltered_pos_subscriber = self.new_subscription( PoseStamped, f"/paf/{self.role_name}/unfiltered_pos", self.set_unfiltered_pos, - qos_profile=1) + qos_profile=1, + ) # Unfiltered_heading subscriber: self.unfiltered_heading_subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/unfiltered_heading", self.set_unfiltered_heading, - qos_profile=1) + qos_profile=1, + ) # endregion Subscriber END @@ -122,24 +131,20 @@ def __init__(self): # ideal carla publisher for easier debug with rqt_plot self.carla_heading_publisher = self.new_publisher( - Float32, - f"/paf/{self.role_name}/carla_current_heading", - qos_profile=1) + Float32, f"/paf/{self.role_name}/carla_current_heading", qos_profile=1 + ) self.carla_pos_publisher = self.new_publisher( - PoseStamped, - f"/paf/{self.role_name}/carla_current_pos", - qos_profile=1) + PoseStamped, f"/paf/{self.role_name}/carla_current_pos", qos_profile=1 + ) # Error Publisher self.position_debug_publisher = self.new_publisher( - Float32MultiArray, - f"/paf/{self.role_name}/position_debug", - qos_profile=1) + Float32MultiArray, f"/paf/{self.role_name}/position_debug", qos_profile=1 + ) self.heading_debug_publisher = self.new_publisher( - Float32MultiArray, - f"/paf/{self.role_name}/heading_debug", - qos_profile=1) + Float32MultiArray, f"/paf/{self.role_name}/heading_debug", qos_profile=1 + ) # endregion Publisher END @@ -198,10 +203,9 @@ def save_position_data(self): return # Specify the path to the folder where you want to save the data - base_path = ('/workspace/code/perception/' - 'src/experiments/' + FOLDER_PATH) - folder_path_x = base_path + '/x_error' - folder_path_y = base_path + '/y_error' + base_path = "/workspace/code/perception/" "src/experiments/" + FOLDER_PATH + folder_path_x = base_path + "/x_error" + folder_path_y = base_path + 
"/y_error" # Ensure the directories exist os.makedirs(folder_path_x, exist_ok=True) os.makedirs(folder_path_y, exist_ok=True) @@ -231,9 +235,8 @@ def save_heading_data(self): return # Specify the path to the folder where you want to save the data - base_path = ('/workspace/code/perception/' - 'src/experiments' + FOLDER_PATH) - folder_path_heading = base_path + '/heading_error' + base_path = "/workspace/code/perception/" "src/experiments" + FOLDER_PATH + folder_path_heading = base_path + "/heading_error" # Ensure the directories exist os.makedirs(folder_path_heading, exist_ok=True) @@ -248,81 +251,93 @@ def save_heading_data(self): # helper methods for writing into csv files def write_csv_heading(self): - with open(self.csv_file_path_heading, 'a', newline='') as file: + with open(self.csv_file_path_heading, "a", newline="") as file: writer = csv.writer(file) # Check if file is empty if os.stat(self.csv_file_path_heading).st_size == 0: - writer.writerow([ - "Time", - "Unfiltered", - "Ideal (Carla)", - "Current", - "Test Filter", - "Unfiltered Error", - "Current Error" - "Test Filter Error", - ]) - writer.writerow([rospy.get_time(), - self.unfiltered_heading.data, - self.carla_current_heading, - self.current_heading.data, - self.test_filter_heading.data, - self.heading_debug_data.data[0], - self.heading_debug_data.data[1], - self.heading_debug_data.data[2] - ]) + writer.writerow( + [ + "Time", + "Unfiltered", + "Ideal (Carla)", + "Current", + "Test Filter", + "Unfiltered Error", + "Current Error", "Test Filter Error", + ] + ) + writer.writerow( + [ + rospy.get_time(), + self.unfiltered_heading.data, + self.carla_current_heading, + self.current_heading.data, + self.test_filter_heading.data, + self.heading_debug_data.data[0], + self.heading_debug_data.data[1], + self.heading_debug_data.data[2], + ] + ) def write_csv_x(self): - with open(self.csv_file_path_x, 'a', newline='') as file: + with open(self.csv_file_path_x, "a", newline="") as file: writer = csv.writer(file) # 
Check if file is empty and add first row if os.stat(self.csv_file_path_x).st_size == 0: - writer.writerow([ - "Time", - "Unfiltered", - "Ideal (Carla)", - "Current", - "Test Filter", - "Unfiltered Error", - "Current Error", - "Test Filter Error" - ]) - writer.writerow([ - rospy.get_time(), - self.unfiltered_pos.pose.position.x, - self.carla_current_pos.x, - self.current_pos.pose.position.x, - self.test_filter_pos.pose.position.x, - self.position_debug_data.data[8], - self.position_debug_data.data[11], - self.position_debug_data.data[14] - ]) + writer.writerow( + [ + "Time", + "Unfiltered", + "Ideal (Carla)", + "Current", + "Test Filter", + "Unfiltered Error", + "Current Error", + "Test Filter Error", + ] + ) + writer.writerow( + [ + rospy.get_time(), + self.unfiltered_pos.pose.position.x, + self.carla_current_pos.x, + self.current_pos.pose.position.x, + self.test_filter_pos.pose.position.x, + self.position_debug_data.data[8], + self.position_debug_data.data[11], + self.position_debug_data.data[14], + ] + ) def write_csv_y(self): - with open(self.csv_file_path_y, 'a', newline='') as file: + with open(self.csv_file_path_y, "a", newline="") as file: writer = csv.writer(file) # Check if file is empty and add first row if os.stat(self.csv_file_path_y).st_size == 0: - writer.writerow([ - "Time", - "Unfiltered", - "Ideal (Carla)", - "Current", - "Test Filter", - "Unfiltered Error", - "Current Error", - "Test Filter Error" - ]) - writer.writerow([ - rospy.get_time(), - self.unfiltered_pos.pose.position.y, - self.carla_current_pos.y, - self.current_pos.pose.position.y, - self.test_filter_pos.pose.position.y, - self.position_debug_data.data[9], - self.position_debug_data.data[12], - self.position_debug_data.data[15] - ]) + writer.writerow( + [ + "Time", + "Unfiltered", + "Ideal (Carla)", + "Current", + "Test Filter", + "Unfiltered Error", + "Current Error", + "Test Filter Error", + ] + ) + writer.writerow( + [ + rospy.get_time(), + self.unfiltered_pos.pose.position.y, + 
self.carla_current_pos.y, + self.current_pos.pose.position.y, + self.test_filter_pos.pose.position.y, + self.position_debug_data.data[9], + self.position_debug_data.data[12], + self.position_debug_data.data[15], + ] + ) # endregion CSV data save methods @@ -331,7 +346,7 @@ def set_carla_attributes(self): This method sets the carla attributes. """ for actor in self.world.get_actors(): - if actor.attributes.get('role_name') == "hero": + if actor.attributes.get("role_name") == "hero": self.carla_car = actor break if self.carla_car is None: @@ -349,9 +364,8 @@ def set_carla_attributes(self): # -> convert to radians # -> also flip the sign to minus self.carla_current_heading = -math.radians( - self.carla_car.get_transform() - .rotation.yaw - ) + self.carla_car.get_transform().rotation.yaw + ) def position_debug(self): """ @@ -418,42 +432,37 @@ def position_debug(self): debug.data[7] = self.test_filter_pos.pose.position.y # error between carla_current_pos and unfiltered_pos - debug.data[8] = (self.carla_current_pos.x - - self.unfiltered_pos.pose.position.x) - debug.data[9] = (self.carla_current_pos.y - - self.unfiltered_pos.pose.position.y) - debug.data[10] = math.sqrt((self.carla_current_pos.x - - self.unfiltered_pos.pose.position.x)**2 - + (self.carla_current_pos.y - - self.unfiltered_pos.pose.position.y)**2) + debug.data[8] = self.carla_current_pos.x - self.unfiltered_pos.pose.position.x + debug.data[9] = self.carla_current_pos.y - self.unfiltered_pos.pose.position.y + debug.data[10] = math.sqrt( + (self.carla_current_pos.x - self.unfiltered_pos.pose.position.x) ** 2 + + (self.carla_current_pos.y - self.unfiltered_pos.pose.position.y) ** 2 + ) # error between carla_current_pos and current_pos - debug.data[11] = (self.carla_current_pos.x - - self.current_pos.pose.position.x) - debug.data[12] = (self.carla_current_pos.y - - self.current_pos.pose.position.y) - debug.data[13] = math.sqrt((self.carla_current_pos.x - - self.current_pos.pose.position.x)**2 - + 
(self.carla_current_pos.y - - self.current_pos.pose.position.y)**2) + debug.data[11] = self.carla_current_pos.x - self.current_pos.pose.position.x + debug.data[12] = self.carla_current_pos.y - self.current_pos.pose.position.y + debug.data[13] = math.sqrt( + (self.carla_current_pos.x - self.current_pos.pose.position.x) ** 2 + + (self.carla_current_pos.y - self.current_pos.pose.position.y) ** 2 + ) # error between carla_current_pos and test_filter_pos - debug.data[14] = (self.carla_current_pos.x - - self.test_filter_pos.pose.position.x) - debug.data[15] = (self.carla_current_pos.y - - self.test_filter_pos.pose.position.y) - debug.data[16] = math.sqrt((self.carla_current_pos.x - - self.test_filter_pos.pose.position.x)**2 - + (self.carla_current_pos.y - - self.test_filter_pos.pose.position.y)**2) + debug.data[14] = self.carla_current_pos.x - self.test_filter_pos.pose.position.x + debug.data[15] = self.carla_current_pos.y - self.test_filter_pos.pose.position.y + debug.data[16] = math.sqrt( + (self.carla_current_pos.x - self.test_filter_pos.pose.position.x) ** 2 + + (self.carla_current_pos.y - self.test_filter_pos.pose.position.y) ** 2 + ) self.position_debug_data = debug self.position_debug_publisher.publish(debug) # for easier debugging with rqt_plot # Publish carla Location as PoseStamped: - self.carla_pos_publisher.publish(carla_location_to_pose_stamped( - self.carla_current_pos)) + self.carla_pos_publisher.publish( + carla_location_to_pose_stamped(self.carla_current_pos) + ) def heading_debug(self): """ @@ -495,16 +504,13 @@ def heading_debug(self): debug.data[3] = self.test_filter_heading.data # error between carla_current_heading and unfiltered_heading - debug.data[4] = (self.carla_current_heading - - self.unfiltered_heading.data) + debug.data[4] = self.carla_current_heading - self.unfiltered_heading.data # error between carla_current_heading and current_heading - debug.data[5] = (self.carla_current_heading - - self.current_heading.data) + debug.data[5] = 
self.carla_current_heading - self.current_heading.data # error between carla_current_heading and test_filter_heading - debug.data[6] = (self.carla_current_heading - - self.test_filter_heading.data) + debug.data[6] = self.carla_current_heading - self.test_filter_heading.data self.heading_debug_data = debug self.heading_debug_publisher.publish(debug) @@ -566,16 +572,16 @@ def loop(): def create_file(folder_path): - ''' + """ This function creates a new csv file in the folder_path in correct sequence looking like data_00.csv, data_01.csv, ... and returns the path to the file. - ''' + """ i = 0 while True: - file_path = f'{folder_path}/data_{str(i).zfill(2)}.csv' + file_path = f"{folder_path}/data_{str(i).zfill(2)}.csv" if not os.path.exists(file_path): - with open(file_path, 'w', newline=''): + with open(file_path, "w", newline=""): pass return file_path i += 1 diff --git a/code/perception/src/position_heading_publisher_node.py b/code/perception/src/position_heading_publisher_node.py index f5b62e72..c54e7d23 100755 --- a/code/perception/src/position_heading_publisher_node.py +++ b/code/perception/src/position_heading_publisher_node.py @@ -7,6 +7,7 @@ from ros_compatibility.node import CompatibleNode from geometry_msgs.msg import PoseStamped from sensor_msgs.msg import NavSatFix, Imu + # from nav_msgs.msg import Odometry from std_msgs.msg import Float32, String from coordinate_transformation import CoordinateTransformer @@ -46,7 +47,8 @@ def __init__(self): """ super(PositionHeadingPublisherNode, self).__init__( - 'position_heading_publisher_node') + "position_heading_publisher_node" + ) """ Possible Filters: @@ -56,9 +58,12 @@ def __init__(self): # Filter used: self.pos_filter = self.get_param("pos_filter", "Kalman") self.heading_filter = self.get_param("heading_filter", "Kalman") - self.loginfo("position_heading_publisher_node started with Pos Filter:" - + self.pos_filter + - " and Heading Filter: " + self.heading_filter) + self.loginfo( + 
"position_heading_publisher_node started with Pos Filter:" + + self.pos_filter + + " and Heading Filter: " + + self.heading_filter + ) # basic info self.role_name = self.get_param("role_name", "hero") @@ -71,25 +76,28 @@ def __init__(self): self.avg_xyz = np.zeros((GPS_RUNNING_AVG_ARGS, 3)) self.avg_gps_counter: int = 0 - # region Subscriber START + # region Subscriber START self.map_sub = self.new_subscription( String, "/carla/" + self.role_name + "/OpenDRIVE", self.get_geoRef, - qos_profile=1) + qos_profile=1, + ) self.imu_subscriber = self.new_subscription( Imu, "/carla/" + self.role_name + "/IMU", self.publish_unfiltered_heading, - qos_profile=1) + qos_profile=1, + ) self.gps_subscriber = self.new_subscription( NavSatFix, "/carla/" + self.role_name + "/GPS", self.publish_unfiltered_gps, - qos_profile=1) + qos_profile=1, + ) # Create subscribers depending on the filter used # Pos Filter: @@ -98,13 +106,15 @@ def __init__(self): PoseStamped, "/paf/" + self.role_name + "/kalman_pos", self.publish_kalman_pos_as_current_pos, - qos_profile=1) + qos_profile=1, + ) elif self.pos_filter == "RunningAvg": self.gps_subscriber_for_running_avg = self.new_subscription( NavSatFix, "/carla/" + self.role_name + "/GPS", self.publish_running_avg_pos_as_current_pos, - qos_profile=1) + qos_profile=1, + ) elif self.pos_filter == "None": # No additional subscriber needed since the unfiltered GPS data is # subscribed by self.gps_subscriber @@ -118,7 +128,8 @@ def __init__(self): Float32, "/paf/" + self.role_name + "/kalman_heading", self.publish_current_heading, - qos_profile=1) + qos_profile=1, + ) elif self.heading_filter == "None": # No additional subscriber needed since the unfiltered heading # data is subscribed by self.imu_subscriber @@ -126,35 +137,31 @@ def __init__(self): # insert additional elifs for other filters here - # endregion Subscriber END + # endregion Subscriber END - # region Publisher START + # region Publisher START # Orientation self.unfiltered_heading_publisher 
= self.new_publisher( - Float32, - f"/paf/{self.role_name}/unfiltered_heading", - qos_profile=1) + Float32, f"/paf/{self.role_name}/unfiltered_heading", qos_profile=1 + ) # 3D Odometry (GPS) for Filters self.unfiltered_gps_publisher = self.new_publisher( - PoseStamped, - f"/paf/{self.role_name}/unfiltered_pos", - qos_profile=1) + PoseStamped, f"/paf/{self.role_name}/unfiltered_pos", qos_profile=1 + ) # Publishes current_pos depending on the filter used self.cur_pos_publisher = self.new_publisher( - PoseStamped, - f"/paf/{self.role_name}/current_pos", - qos_profile=1) + PoseStamped, f"/paf/{self.role_name}/current_pos", qos_profile=1 + ) self.__heading: float = 0 self.__heading_publisher = self.new_publisher( - Float32, - f"/paf/{self.role_name}/current_heading", - qos_profile=1) + Float32, f"/paf/{self.role_name}/current_heading", qos_profile=1 + ) # endregion Publisher END -# region HEADING FUNCTIONS + # region HEADING FUNCTIONS def publish_unfiltered_heading(self, data: Imu): """ This method is called when new IMU data is received. @@ -162,10 +169,12 @@ def publish_unfiltered_heading(self, data: Imu): :param data: new IMU measurement :return: """ - data_orientation_q = [data.orientation.x, - data.orientation.y, - data.orientation.z, - data.orientation.w] + data_orientation_q = [ + data.orientation.x, + data.orientation.y, + data.orientation.z, + data.orientation.w, + ] heading = quat_to_heading(data_orientation_q) @@ -205,9 +214,9 @@ def publish_current_heading(self, data: Float32): # insert new heading functions here... -# endregion HEADING FUNCTIONS END + # endregion HEADING FUNCTIONS END -# region POSITION FUNCTIONS + # region POSITION FUNCTIONS def publish_running_avg_pos_as_current_pos(self, data: NavSatFix): """ @@ -314,7 +323,7 @@ def publish_unfiltered_gps(self, data: NavSatFix): # insert new position functions here... 
-# endregion POSITION FUNCTIONS END + # endregion POSITION FUNCTIONS END def get_geoRef(self, opendrive: String): """_summary_ @@ -336,8 +345,8 @@ def get_geoRef(self, opendrive: String): indexLatEnd = geoRefText.find(" ", indexLat) indexLonEnd = geoRefText.find(" ", indexLon) - latValue = float(geoRefText[indexLat + len(latString):indexLatEnd]) - lonValue = float(geoRefText[indexLon + len(lonString):indexLonEnd]) + latValue = float(geoRefText[indexLat + len(latString) : indexLatEnd]) + lonValue = float(geoRefText[indexLon + len(lonString) : indexLonEnd]) CoordinateTransformer.la_ref = latValue CoordinateTransformer.ln_ref = lonValue diff --git a/code/perception/src/traffic_light_detection/src/data_generation/weights_organizer.py b/code/perception/src/traffic_light_detection/src/data_generation/weights_organizer.py index 5e4d9d29..01b64189 100644 --- a/code/perception/src/traffic_light_detection/src/data_generation/weights_organizer.py +++ b/code/perception/src/traffic_light_detection/src/data_generation/weights_organizer.py @@ -18,9 +18,11 @@ def __init__(self, cfg, model): try: os.makedirs(self.cfg.WEIGHTS_PATH, exist_ok=True) except FileExistsError: - sys.exit(f"The directory {self.cfg.WEIGHTS_PATH} already exists." - f"Cannot create weights-directory for training." - f"Try again in at least one minute.") + sys.exit( + f"The directory {self.cfg.WEIGHTS_PATH} already exists. " + f"Cannot create weights-directory for training. " + f"Try again in at least one minute."
+ ) def save(self, accuracy, val_accuracy): """ @@ -28,14 +30,18 @@ def save(self, accuracy, val_accuracy): @param accuracy: Accuracy of the model in the last epoch @param val_accuracy: Accuracy of the model on the validation-subset """ - filename = self.cfg.WEIGHTS_PATH + f"model_acc_{round(accuracy, 2)}" \ - + f"_val_{round(val_accuracy, 2)}.pt" + filename = ( + self.cfg.WEIGHTS_PATH + + f"model_acc_{round(accuracy, 2)}" + + f"_val_{round(val_accuracy, 2)}.pt" + ) if len(self.best) == 0: torch.save(self.model.state_dict(), filename) self.best.append((accuracy, val_accuracy, filename)) - elif val_accuracy > self.best[len(self.best) - 1][1] or \ - (val_accuracy >= self.best[len(self.best) - 1][1] and - accuracy > self.best[len(self.best) - 1][0]): + elif val_accuracy > self.best[len(self.best) - 1][1] or ( + val_accuracy >= self.best[len(self.best) - 1][1] + and accuracy > self.best[len(self.best) - 1][0] + ): if len(self.best) == 1: delete = self.best[0][2] diff --git a/code/perception/src/traffic_light_detection/src/traffic_light_config.py b/code/perception/src/traffic_light_detection/src/traffic_light_config.py index e1c720c2..39c63da1 100644 --- a/code/perception/src/traffic_light_detection/src/traffic_light_config.py +++ b/code/perception/src/traffic_light_detection/src/traffic_light_config.py @@ -7,7 +7,7 @@ class TrafficLightConfig: def __init__(self): # General settings - self.DEVICE = ('cuda' if torch.cuda.is_available() else 'cpu') + self.DEVICE = "cuda" if torch.cuda.is_available() else "cpu" self.TIME = datetime.now().strftime("%d.%m.%Y_%H.%M") # Training diff --git a/code/perception/src/traffic_light_detection/src/traffic_light_detection/classification_model.py b/code/perception/src/traffic_light_detection/src/traffic_light_detection/classification_model.py index f44d2c31..174c6519 100644 --- a/code/perception/src/traffic_light_detection/src/traffic_light_detection/classification_model.py +++ 
b/code/perception/src/traffic_light_detection/src/traffic_light_detection/classification_model.py @@ -12,17 +12,21 @@ def __init__(self, num_classes, in_channels=3): @param num_classes: Number of classes """ super(ClassificationModel, self).__init__() - self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=4, - kernel_size=5, padding='same') + self.conv1 = nn.Conv2d( + in_channels=in_channels, out_channels=4, kernel_size=5, padding="same" + ) self.batch_norm1 = nn.BatchNorm2d(num_features=4) - self.conv2 = nn.Conv2d(in_channels=4, out_channels=4, kernel_size=5, - padding='same') + self.conv2 = nn.Conv2d( + in_channels=4, out_channels=4, kernel_size=5, padding="same" + ) self.max_pool1 = nn.MaxPool2d(kernel_size=(2, 2)) - self.conv3 = nn.Conv2d(in_channels=4, out_channels=4, kernel_size=3, - padding='same') + self.conv3 = nn.Conv2d( + in_channels=4, out_channels=4, kernel_size=3, padding="same" + ) self.max_pool2 = nn.MaxPool2d(kernel_size=(2, 2)) - self.conv4 = nn.Conv2d(in_channels=4, out_channels=4, kernel_size=3, - padding='same') + self.conv4 = nn.Conv2d( + in_channels=4, out_channels=4, kernel_size=3, padding="same" + ) self.max_pool3 = nn.MaxPool2d(kernel_size=(2, 2)) self.flatten = nn.Flatten() self.dropout = nn.Dropout(p=0.3) @@ -63,6 +67,8 @@ def load_model(cfg): print(f"Pretrained model loaded from {path}") return model except Exception as e: - print(f"No pretrained model found at {path}: {e}\n" - f"Created new model with random weights.") + print( + f"No pretrained model found at {path}: {e}\n" + f"Created new model with random weights." 
+ ) return model.eval() diff --git a/code/perception/src/traffic_light_detection/src/traffic_light_detection/traffic_light_inference.py b/code/perception/src/traffic_light_detection/src/traffic_light_detection/traffic_light_inference.py index cb3d9e5c..d9922795 100644 --- a/code/perception/src/traffic_light_detection/src/traffic_light_detection/traffic_light_inference.py +++ b/code/perception/src/traffic_light_detection/src/traffic_light_detection/traffic_light_inference.py @@ -2,10 +2,14 @@ import torch.cuda import torchvision.transforms as t -from traffic_light_detection.src.traffic_light_detection.transforms \ - import Normalize, ResizeAndPadToSquare, load_image -from traffic_light_detection.src.traffic_light_detection.classification_model \ - import ClassificationModel +from traffic_light_detection.src.traffic_light_detection.transforms import ( + Normalize, + ResizeAndPadToSquare, + load_image, +) +from traffic_light_detection.src.traffic_light_detection.classification_model import ( + ClassificationModel, +) from torchvision.transforms import ToTensor from traffic_light_detection.src.traffic_light_config import TrafficLightConfig @@ -15,18 +19,19 @@ def parse_args(): Parses arguments for execution given by the command line. 
@return: Parsed arguments """ - parser = argparse.ArgumentParser(description='Inference traffic light ' - 'detection') - parser.add_argument('--model', - default='/opt/project/code/perception/src/' - 'traffic_light_detection/models/' - '05.12.2022_17.47/' - 'model_acc_99.53_val_100.0.pt', - help='path to pretrained model', - type=str) - parser.add_argument('--image', default=None, - help='/dataset/val/green/green_83.png', - type=str) + parser = argparse.ArgumentParser(description="Inference traffic light " "detection") + parser.add_argument( + "--model", + default="/opt/project/code/perception/src/" + "traffic_light_detection/models/" + "05.12.2022_17.47/" + "model_acc_99.53_val_100.0.pt", + help="path to pretrained model", + type=str, + ) + parser.add_argument( + "--image", default=None, help="/dataset/val/green/green_83.png", type=str + ) return parser.parse_args() @@ -39,19 +44,17 @@ def __init__(self, model_path): """ self.cfg = TrafficLightConfig() self.cfg.MODEL_PATH = model_path - self.transforms = t.Compose([ - ToTensor(), - ResizeAndPadToSquare([32, 32]), - Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) - ]) + self.transforms = t.Compose( + [ + ToTensor(), + ResizeAndPadToSquare([32, 32]), + Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), + ] + ) self.model = ClassificationModel.load_model(self.cfg) self.model = self.model.to(self.cfg.DEVICE) - self.class_dict = {0: 'Backside', - 1: 'Green', - 2: 'Red', - 3: 'Side', - 4: 'Yellow'} + self.class_dict = {0: "Backside", 1: "Green", 2: "Red", 3: "Side", 4: "Yellow"} def __call__(self, img): """ @@ -68,7 +71,7 @@ def __call__(self, img): # main function for testing purposes -if __name__ == '__main__': +if __name__ == "__main__": args = parse_args() image_path = args.image image = load_image(image_path) diff --git a/code/perception/src/traffic_light_detection/src/traffic_light_detection/traffic_light_training.py 
b/code/perception/src/traffic_light_detection/src/traffic_light_detection/traffic_light_training.py index 1322d024..dcf4fe7b 100644 --- a/code/perception/src/traffic_light_detection/src/traffic_light_detection/traffic_light_training.py +++ b/code/perception/src/traffic_light_detection/src/traffic_light_detection/traffic_light_training.py @@ -10,12 +10,17 @@ from ruamel.yaml import YAML import sys import os -sys.path.append(os.path.abspath(sys.path[0] + '/..')) -from traffic_light_detection.transforms import Normalize, \ - ResizeAndPadToSquare, load_image # noqa: E402 + +sys.path.append(os.path.abspath(sys.path[0] + "/..")) +from traffic_light_detection.transforms import ( + Normalize, + ResizeAndPadToSquare, + load_image, +) # noqa: E402 from data_generation.weights_organizer import WeightsOrganizer # noqa: E402 -from traffic_light_detection.classification_model import ClassificationModel \ - # noqa: E402 +from traffic_light_detection.classification_model import ( + ClassificationModel, +) # noqa: E402 from traffic_light_config import TrafficLightConfig # noqa: E402 @@ -27,38 +32,50 @@ def __init__(self, cfg): @param cfg: Config file for traffic light classification """ self.cfg = cfg - train_transforms = t.Compose([ - ToTensor(), - ResizeAndPadToSquare([32, 32]), - Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) - # ApplyMask(dataset_root + "/mask.png") - ]) - val_transforms = t.Compose([ - ToTensor(), - ResizeAndPadToSquare([32, 32]), - Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) - ]) - self.train_dataset = ImageFolder(root=self.cfg.DATASET_PATH + "/train", - transform=train_transforms, - loader=load_image) - self.train_loader = DataLoader(dataset=self.train_dataset, - batch_size=self.cfg.BATCH_SIZE, - num_workers=self.cfg.NUM_WORKERS, - shuffle=True) - self.val_dataset = ImageFolder(root=self.cfg.DATASET_PATH + "/val", - transform=val_transforms, - loader=load_image) - self.val_loader = DataLoader(dataset=self.val_dataset, - 
batch_size=self.cfg.BATCH_SIZE, - num_workers=self.cfg.NUM_WORKERS) - self.model = ClassificationModel(num_classes=self.cfg.NUM_CLASSES, - in_channels=self.cfg.NUM_CHANNELS) + train_transforms = t.Compose( + [ + ToTensor(), + ResizeAndPadToSquare([32, 32]), + Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), + # ApplyMask(dataset_root + "/mask.png") + ] + ) + val_transforms = t.Compose( + [ + ToTensor(), + ResizeAndPadToSquare([32, 32]), + Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), + ] + ) + self.train_dataset = ImageFolder( + root=self.cfg.DATASET_PATH + "/train", + transform=train_transforms, + loader=load_image, + ) + self.train_loader = DataLoader( + dataset=self.train_dataset, + batch_size=self.cfg.BATCH_SIZE, + num_workers=self.cfg.NUM_WORKERS, + shuffle=True, + ) + self.val_dataset = ImageFolder( + root=self.cfg.DATASET_PATH + "/val", + transform=val_transforms, + loader=load_image, + ) + self.val_loader = DataLoader( + dataset=self.val_dataset, + batch_size=self.cfg.BATCH_SIZE, + num_workers=self.cfg.NUM_WORKERS, + ) + self.model = ClassificationModel( + num_classes=self.cfg.NUM_CLASSES, in_channels=self.cfg.NUM_CHANNELS + ) self.model = self.model.to(self.cfg.DEVICE) self.optimizer = Adam(self.model.parameters()) self.lr_scheduler = ExponentialLR(self.optimizer, 0.95) self.loss_function = torch.nn.CrossEntropyLoss() - self.weights_organizer = WeightsOrganizer(cfg=self.cfg, - model=self.model) + self.weights_organizer = WeightsOrganizer(cfg=self.cfg, model=self.model) self.live = Live() @@ -76,8 +93,12 @@ def run(self): self.live.log_metric("validation/loss", loss) self.lr_scheduler.step() self.live.next_step() - tepoch.set_postfix(loss=epoch_loss, accuracy=epoch_correct, - val_loss=loss, val_accuracy=correct) + tepoch.set_postfix( + loss=epoch_loss, + accuracy=epoch_correct, + val_loss=loss, + val_accuracy=correct, + ) tepoch.update(1) print(tepoch) self.weights_organizer.save(epoch_correct, correct) @@ -99,14 +120,14 @@ def epoch(self): 
loss = self.loss_function(outputs, labels) epoch_loss += loss.item() _, predictions = torch.max(outputs.data, 1) - corr = (predictions == labels) + corr = predictions == labels epoch_correct += corr.sum().item() loss.backward() self.optimizer.step() epoch_loss /= len(self.train_dataset) epoch_correct /= len(self.train_dataset) - return epoch_loss, 100. * epoch_correct + return epoch_loss, 100.0 * epoch_correct def validate(self): """ @@ -114,7 +135,7 @@ def validate(self): @return: Average loss and accuracy of the net on the validation-subset """ self.model.eval() - val_loss = 0. + val_loss = 0.0 val_correct = 0 for i, data in enumerate(self.val_loader): images = data[0].to(self.cfg.DEVICE) @@ -125,22 +146,22 @@ def validate(self): loss = self.loss_function(outputs, labels) val_loss += loss.item() _, predictions = torch.max(outputs.data, 1) - corr = (predictions == labels) + corr = predictions == labels val_correct += corr.sum().item() val_loss /= len(self.val_dataset) val_correct /= len(self.val_dataset) - return val_loss, 100. 
* val_correct + return val_loss, 100.0 * val_correct -if __name__ == '__main__': +if __name__ == "__main__": yaml = YAML(typ="safe") with open("params.yaml") as f: params = yaml.load(f) cfg = TrafficLightConfig() - cfg.EPOCHS = params['train']['epochs'] - cfg.BATCH_SIZE = params['train']['batch_size'] + cfg.EPOCHS = params["train"]["epochs"] + cfg.BATCH_SIZE = params["train"]["batch_size"] print(f"Computation device: {cfg.DEVICE}\n") tr = TrafficLightTraining(cfg) tr.run() diff --git a/code/perception/src/traffic_light_detection/src/traffic_light_detection/transforms.py b/code/perception/src/traffic_light_detection/src/traffic_light_detection/transforms.py index 41dc8bfc..82aba2bc 100644 --- a/code/perception/src/traffic_light_detection/src/traffic_light_detection/transforms.py +++ b/code/perception/src/traffic_light_detection/src/traffic_light_detection/transforms.py @@ -8,7 +8,7 @@ def load_image(path): Loads an image from the given path @rtype: RGB-coded PIL image """ - image = Image.open(path).convert('RGB') + image = Image.open(path).convert("RGB") return image @@ -56,7 +56,7 @@ def __init__(self, path): mask @param path: Path to the mask """ - self.mask = functional.to_tensor(Image.open(path).convert('L')) + self.mask = functional.to_tensor(Image.open(path).convert("L")) def __call__(self, image): mask = torchvision.transforms.Resize(image.shape[1:])(self.mask) diff --git a/code/perception/src/traffic_light_node.py b/code/perception/src/traffic_light_node.py index d7eee4b3..e8216ac1 100755 --- a/code/perception/src/traffic_light_node.py +++ b/code/perception/src/traffic_light_node.py @@ -10,8 +10,9 @@ from perception.msg import TrafficLightState from std_msgs.msg import Int16 from cv_bridge import CvBridge -from traffic_light_detection.src.traffic_light_detection.traffic_light_inference \ - import TrafficLightInference # noqa: E501 +from traffic_light_detection.src.traffic_light_detection.traffic_light_inference import ( + TrafficLightInference, +) # noqa: 
E501 import cv2 import numpy as np @@ -37,20 +38,19 @@ def setup_camera_subscriptions(self): msg_type=numpy_msg(ImageMsg), callback=self.handle_camera_image, topic=f"/paf/{self.role_name}/{self.side}/segmented_traffic_light", - qos_profile=1 + qos_profile=1, ) def setup_traffic_light_publishers(self): self.traffic_light_publisher = self.new_publisher( msg_type=TrafficLightState, topic=f"/paf/{self.role_name}/{self.side}/traffic_light_state", - qos_profile=1 + qos_profile=1, ) self.traffic_light_distance_publisher = self.new_publisher( msg_type=Int16, - topic=f"/paf/{self.role_name}/{self.side}" + - "/traffic_light_y_distance", - qos_profile=1 + topic=f"/paf/{self.role_name}/{self.side}" + "/traffic_light_y_distance", + qos_profile=1, ) def auto_invalidate_state(self): @@ -74,8 +74,12 @@ def handle_camera_image(self, image): rgb_image = cv2.cvtColor(cv2_image, cv2.COLOR_BGR2RGB) result, data = self.classifier(cv2_image) - if data[0][0] > 1e-15 and data[0][3] > 1e-15 or \ - data[0][0] > 1e-10 or data[0][3] > 1e-10: + if ( + data[0][0] > 1e-15 + and data[0][3] > 1e-15 + or data[0][0] > 1e-10 + or data[0][3] > 1e-10 + ): return # too uncertain, may not be a traffic light if not is_front(rgb_image): @@ -123,8 +127,7 @@ def is_front(image): mask = get_light_mask(image) # Find contours in the thresholded image, use only the largest one - contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, - cv2.CHAIN_APPROX_SIMPLE) + contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) contours = sorted(contours, key=cv2.contourArea, reverse=True)[:1] contour = contours[0] if contours else None diff --git a/code/perception/src/vision_node.py b/code/perception/src/vision_node.py index 66bb4ad9..ef8158aa 100755 --- a/code/perception/src/vision_node.py +++ b/code/perception/src/vision_node.py @@ -3,13 +3,16 @@ from ros_compatibility.node import CompatibleNode import ros_compatibility as roscomp import torch -from torchvision.models.segmentation import 
DeepLabV3_ResNet101_Weights, \ - deeplabv3_resnet101 -from torchvision.models.detection.faster_rcnn import \ - FasterRCNN_MobileNet_V3_Large_320_FPN_Weights, \ - FasterRCNN_ResNet50_FPN_V2_Weights, \ - fasterrcnn_resnet50_fpn_v2, \ - fasterrcnn_mobilenet_v3_large_320_fpn +from torchvision.models.segmentation import ( + DeepLabV3_ResNet101_Weights, + deeplabv3_resnet101, +) +from torchvision.models.detection.faster_rcnn import ( + FasterRCNN_MobileNet_V3_Large_320_FPN_Weights, + FasterRCNN_ResNet50_FPN_V2_Weights, + fasterrcnn_resnet50_fpn_v2, + fasterrcnn_mobilenet_v3_large_320_fpn, +) import torchvision.transforms as t import cv2 from rospy.numpy_msg import numpy_msg @@ -39,38 +42,41 @@ def __init__(self, name, **kwargs): # dictionary of pretrained models self.model_dict = { - "fasterrcnn_resnet50_fpn_v2": - (fasterrcnn_resnet50_fpn_v2( - weights=FasterRCNN_ResNet50_FPN_V2_Weights.DEFAULT), + "fasterrcnn_resnet50_fpn_v2": ( + fasterrcnn_resnet50_fpn_v2( + weights=FasterRCNN_ResNet50_FPN_V2_Weights.DEFAULT + ), FasterRCNN_ResNet50_FPN_V2_Weights.DEFAULT, "detection", - "pyTorch"), - "fasterrcnn_mobilenet_v3_large_320_fpn": - (fasterrcnn_mobilenet_v3_large_320_fpn( - weights=FasterRCNN_MobileNet_V3_Large_320_FPN_Weights.DEFAULT), + "pyTorch", + ), + "fasterrcnn_mobilenet_v3_large_320_fpn": ( + fasterrcnn_mobilenet_v3_large_320_fpn( + weights=FasterRCNN_MobileNet_V3_Large_320_FPN_Weights.DEFAULT + ), FasterRCNN_MobileNet_V3_Large_320_FPN_Weights.DEFAULT, "detection", - "pyTorch"), - "deeplabv3_resnet101": - (deeplabv3_resnet101( - weights=DeepLabV3_ResNet101_Weights.DEFAULT), + "pyTorch", + ), + "deeplabv3_resnet101": ( + deeplabv3_resnet101(weights=DeepLabV3_ResNet101_Weights.DEFAULT), DeepLabV3_ResNet101_Weights.DEFAULT, "segmentation", - "pyTorch"), - 'yolov8n': (YOLO, "yolov8n.pt", "detection", "ultralytics"), - 'yolov8s': (YOLO, "yolov8s.pt", "detection", "ultralytics"), - 'yolov8m': (YOLO, "yolov8m.pt", "detection", "ultralytics"), - 'yolov8l': (YOLO, 
"yolov8l.pt", "detection", "ultralytics"), - 'yolov8x': (YOLO, "yolov8x.pt", "detection", "ultralytics"), - 'yolo_nas_l': (NAS, "yolo_nas_l.pt", "detection", "ultralytics"), - 'yolo_nas_m': (NAS, "yolo_nas_m.pt", "detection", "ultralytics"), - 'yolo_nas_s': (NAS, "yolo_nas_s.pt", "detection", "ultralytics"), - 'rtdetr-l': (RTDETR, "rtdetr-l.pt", "detection", "ultralytics"), - 'rtdetr-x': (RTDETR, "rtdetr-x.pt", "detection", "ultralytics"), - 'yolov8x-seg': (YOLO, "yolov8x-seg.pt", "segmentation", - "ultralytics"), - 'sam_l': (SAM, "sam_l.pt", "detection", "ultralytics"), - 'FastSAM-x': (FastSAM, "FastSAM-x.pt", "detection", "ultralytics"), + "pyTorch", + ), + "yolov8n": (YOLO, "yolov8n.pt", "detection", "ultralytics"), + "yolov8s": (YOLO, "yolov8s.pt", "detection", "ultralytics"), + "yolov8m": (YOLO, "yolov8m.pt", "detection", "ultralytics"), + "yolov8l": (YOLO, "yolov8l.pt", "detection", "ultralytics"), + "yolov8x": (YOLO, "yolov8x.pt", "detection", "ultralytics"), + "yolo_nas_l": (NAS, "yolo_nas_l.pt", "detection", "ultralytics"), + "yolo_nas_m": (NAS, "yolo_nas_m.pt", "detection", "ultralytics"), + "yolo_nas_s": (NAS, "yolo_nas_s.pt", "detection", "ultralytics"), + "rtdetr-l": (RTDETR, "rtdetr-l.pt", "detection", "ultralytics"), + "rtdetr-x": (RTDETR, "rtdetr-x.pt", "detection", "ultralytics"), + "yolov8x-seg": (YOLO, "yolov8x-seg.pt", "segmentation", "ultralytics"), + "sam_l": (SAM, "sam_l.pt", "detection", "ultralytics"), + "FastSAM-x": (FastSAM, "FastSAM-x.pt", "detection", "ultralytics"), } # general setup @@ -82,8 +88,7 @@ def __init__(self, name, **kwargs): self.left = self.get_param("left") self.right = self.get_param("right") - self.device = torch.device("cuda" - if torch.cuda.is_available() else "cpu") + self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") self.depth_images = [] self.dist_arrays = None @@ -142,7 +147,7 @@ def setup_camera_subscriptions(self, side): msg_type=numpy_msg(ImageMsg), callback=self.handle_camera_image, 
topic=f"/carla/{self.role_name}/{side}/image", - qos_profile=1 + qos_profile=1, ) def setup_dist_array_subscription(self): @@ -154,8 +159,8 @@ def setup_dist_array_subscription(self): self.new_subscription( msg_type=numpy_msg(ImageMsg), callback=self.handle_dist_array, - topic='/paf/hero/Center/dist_array', - qos_profile=1 + topic="/paf/hero/Center/dist_array", + qos_profile=1, ) def setup_camera_publishers(self): @@ -170,25 +175,25 @@ def setup_camera_publishers(self): self.publisher_center = self.new_publisher( msg_type=numpy_msg(ImageMsg), topic=f"/paf/{self.role_name}/Center/segmented_image", - qos_profile=1 + qos_profile=1, ) if self.back: self.publisher_back = self.new_publisher( msg_type=numpy_msg(ImageMsg), topic=f"/paf/{self.role_name}/Back/segmented_image", - qos_profile=1 + qos_profile=1, ) if self.left: self.publisher_left = self.new_publisher( msg_type=numpy_msg(ImageMsg), topic=f"/paf/{self.role_name}/Left/segmented_image", - qos_profile=1 + qos_profile=1, ) if self.right: self.publisher_right = self.new_publisher( msg_type=numpy_msg(ImageMsg), topic=f"/paf/{self.role_name}/Right/segmented_image", - qos_profile=1 + qos_profile=1, ) def setup_object_distance_publishers(self): @@ -200,7 +205,8 @@ def setup_object_distance_publishers(self): self.distance_publisher = self.new_publisher( msg_type=Float32MultiArray, topic=f"/paf/{self.role_name}/{self.side}/object_distance", - qos_profile=1) + qos_profile=1, + ) def setup_traffic_light_publishers(self): """ @@ -210,7 +216,7 @@ def setup_traffic_light_publishers(self): self.traffic_light_publisher = self.new_publisher( msg_type=numpy_msg(ImageMsg), topic=f"/paf/{self.role_name}/{self.side}/segmented_traffic_light", - qos_profile=1 + qos_profile=1, ) def handle_camera_image(self, image): @@ -233,12 +239,11 @@ def handle_camera_image(self, image): vision_result = self.predict_ultralytics(image) # publish vision result to rviz - img_msg = self.bridge.cv2_to_imgmsg(vision_result, - encoding="rgb8") + img_msg = 
self.bridge.cv2_to_imgmsg(vision_result, encoding="rgb8") img_msg.header = image.header # publish img to corresponding angle topic - side = rospy.resolve_name(img_msg.header.frame_id).split('/')[2] + side = rospy.resolve_name(img_msg.header.frame_id).split("/")[2] if side == "Center": self.publisher_center.publish(img_msg) if side == "Back": @@ -259,9 +264,9 @@ def handle_dist_array(self, dist_array): # callback function for lidar depth image # since frequency is lower than image frequency # the latest lidar image is saved - dist_array = \ - self.bridge.imgmsg_to_cv2(img_msg=dist_array, - desired_encoding='passthrough') + dist_array = self.bridge.imgmsg_to_cv2( + img_msg=dist_array, desired_encoding="passthrough" + ) self.dist_arrays = dist_array def predict_torch(self, image): @@ -282,14 +287,16 @@ def predict_torch(self, image): self.model.eval() # preprocess image - cv_image = self.bridge.imgmsg_to_cv2(img_msg=image, - desired_encoding='passthrough') + cv_image = self.bridge.imgmsg_to_cv2( + img_msg=image, desired_encoding="passthrough" + ) cv_image = cv2.cvtColor(cv_image, cv2.COLOR_RGB2BGR) - preprocess = t.Compose([ - t.ToTensor(), - t.Normalize(mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]) - ]) + preprocess = t.Compose( + [ + t.ToTensor(), + t.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), + ] + ) input_image = preprocess(cv_image).unsqueeze(dim=0) # get prediction @@ -297,10 +304,10 @@ def predict_torch(self, image): prediction = self.model(input_image) # apply visualition - if (self.type == "detection"): + if self.type == "detection": vision_result = self.apply_bounding_boxes(cv_image, prediction[0]) - if (self.type == "segmentation"): - vision_result = self.create_mask(cv_image, prediction['out']) + if self.type == "segmentation": + vision_result = self.create_mask(cv_image, prediction["out"]) return vision_result @@ -321,8 +328,9 @@ def predict_ultralytics(self, image): """ # preprocess image - cv_image = 
self.bridge.imgmsg_to_cv2(img_msg=image, - desired_encoding='passthrough') + cv_image = self.bridge.imgmsg_to_cv2( + img_msg=image, desired_encoding="passthrough" + ) cv_image = cv2.cvtColor(cv_image, cv2.COLOR_RGB2BGR) # run model prediction @@ -347,9 +355,12 @@ def predict_ultralytics(self, image): # crop bounding box area out of depth image distances = np.asarray( - self.dist_arrays[int(pixels[1]):int(pixels[3]):1, - int(pixels[0]):int(pixels[2]):1, - ::]) + self.dist_arrays[ + int(pixels[1]) : int(pixels[3]) : 1, + int(pixels[0]) : int(pixels[2]) : 1, + ::, + ] + ) # set all 0 (black) values to np.inf (necessary if # you want to search for minimum) @@ -376,14 +387,14 @@ def predict_ultralytics(self, image): # copy actual lidar points obj_dist_min_x = self.min_x(dist_array=distances_copy) - obj_dist_min_abs_y = self.min_abs_y( - dist_array=distances_copy) + obj_dist_min_abs_y = self.min_abs_y(dist_array=distances_copy) # absolut distance to object for visualization abs_distance = np.sqrt( - obj_dist_min_x[0]**2 + - obj_dist_min_x[1]**2 + - obj_dist_min_x[2]**2) + obj_dist_min_x[0] ** 2 + + obj_dist_min_x[1] ** 2 + + obj_dist_min_x[2] ** 2 + ) # append class index, min x and min abs y to output array distance_output.append(float(cls)) @@ -399,36 +410,35 @@ def predict_ultralytics(self, image): # add values for visualization c_boxes.append(torch.tensor(pixels)) - c_labels.append(f"Class: {cls}," - f"Meters: {round(abs_distance, 2)}," - f"({round(float(obj_dist_min_x[0]), 2)}," - f"{round(float(obj_dist_min_abs_y[1]), 2)})") + c_labels.append( + f"Class: {cls}," + f"Meters: {round(abs_distance, 2)}," + f"({round(float(obj_dist_min_x[0]), 2)}," + f"{round(float(obj_dist_min_abs_y[1]), 2)})" + ) # publish list of distances of objects for planning - self.distance_publisher.publish( - Float32MultiArray(data=distance_output)) + self.distance_publisher.publish(Float32MultiArray(data=distance_output)) # transform image transposed_image = np.transpose(cv_image, (2, 0, 1)) 
- image_np_with_detections = torch.tensor(transposed_image, - dtype=torch.uint8) + image_np_with_detections = torch.tensor(transposed_image, dtype=torch.uint8) # proceed with traffic light detection if 9 in output[0].boxes.cls: - asyncio.run(self.process_traffic_lights(output[0], - cv_image, - image.header)) + asyncio.run(self.process_traffic_lights(output[0], cv_image, image.header)) # draw bounding boxes and distance values on image c_boxes = torch.stack(c_boxes) - box = draw_bounding_boxes(image_np_with_detections, - c_boxes, - c_labels, - colors='blue', - width=3, - font_size=12) - np_box_img = np.transpose(box.detach().numpy(), - (1, 2, 0)) + box = draw_bounding_boxes( + image_np_with_detections, + c_boxes, + c_labels, + colors="blue", + width=3, + font_size=12, + ) + np_box_img = np.transpose(box.detach().numpy(), (1, 2, 0)) box_img = cv2.cvtColor(np_box_img, cv2.COLOR_BGR2RGB) return box_img @@ -444,11 +454,8 @@ def min_x(self, dist_array): np.array: 1x3 numpy array of min x lidar point """ - min_x_sorted_indices = np.argsort( - dist_array[:, :, 0], - axis=None) - x, y = np.unravel_index(min_x_sorted_indices[0], - dist_array.shape[:2]) + min_x_sorted_indices = np.argsort(dist_array[:, :, 0], axis=None) + x, y = np.unravel_index(min_x_sorted_indices[0], dist_array.shape[:2]) return dist_array[x][y].copy() def min_abs_y(self, dist_array): @@ -464,11 +471,8 @@ def min_abs_y(self, dist_array): """ abs_distance_copy = np.abs(dist_array.copy()) - min_y_sorted_indices = np.argsort( - abs_distance_copy[:, :, 1], - axis=None) - x, y = np.unravel_index(min_y_sorted_indices[0], - abs_distance_copy.shape[:2]) + min_y_sorted_indices = np.argsort(abs_distance_copy[:, :, 1], axis=None) + x, y = np.unravel_index(min_y_sorted_indices[0], abs_distance_copy.shape[:2]) return dist_array[x][y].copy() # you can add similar functions to support other camera angles here @@ -493,12 +497,11 @@ async def process_traffic_lights(self, prediction, cv_image, image_header): continue box = 
box[0:4].astype(int) - segmented = cv_image[box[1]:box[3], box[0]:box[2]] + segmented = cv_image[box[1] : box[3], box[0] : box[2]] traffic_light_y_distance = box[1] - traffic_light_image = self.bridge.cv2_to_imgmsg(segmented, - encoding="rgb8") + traffic_light_image = self.bridge.cv2_to_imgmsg(segmented, encoding="rgb8") traffic_light_image.header = image_header traffic_light_image.header.frame_id = str(traffic_light_y_distance) self.traffic_light_publisher.publish(traffic_light_image) @@ -524,9 +527,9 @@ def create_mask(self, input_image, model_output): transposed_image = np.transpose(input_image, (2, 0, 1)) tensor_image = torch.tensor(transposed_image) tensor_image = tensor_image.to(dtype=torch.uint8) - segmented_image = draw_segmentation_masks(tensor_image, - output_predictions, - alpha=0.6) + segmented_image = draw_segmentation_masks( + tensor_image, output_predictions, alpha=0.6 + ) cv_segmented = segmented_image.detach().cpu().numpy() cv_segmented = np.transpose(cv_segmented, (1, 2, 0)) return cv_segmented @@ -544,21 +547,20 @@ def apply_bounding_boxes(self, input_image, model_output): """ transposed_image = np.transpose(input_image, (2, 0, 1)) - image_np_with_detections = torch.tensor(transposed_image, - dtype=torch.uint8) - boxes = model_output['boxes'] - labels = [self.weights.meta["categories"][i] - for i in model_output['labels']] - - box = draw_bounding_boxes(image_np_with_detections, - boxes, - labels, - colors='blue', - width=3, - font_size=24) - - np_box_img = np.transpose(box.detach().numpy(), - (1, 2, 0)) + image_np_with_detections = torch.tensor(transposed_image, dtype=torch.uint8) + boxes = model_output["boxes"] + labels = [self.weights.meta["categories"][i] for i in model_output["labels"]] + + box = draw_bounding_boxes( + image_np_with_detections, + boxes, + labels, + colors="blue", + width=3, + font_size=24, + ) + + np_box_img = np.transpose(box.detach().numpy(), (1, 2, 0)) box_img = cv2.cvtColor(np_box_img, cv2.COLOR_BGR2RGB) return box_img 
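The `min_x` helper that the hunks above reformat is compact enough to sketch in isolation. Below is a self-contained version: the function body follows the patched code, while the sample `points` array is invented for illustration. `np.argsort` over the flattened x-channel finds the index of the nearest lidar point, and `np.unravel_index` maps that flat index back to 2D pixel coordinates.

```python
import numpy as np


def min_x(dist_array: np.ndarray) -> np.ndarray:
    """Return the lidar point (x, y, z) with the smallest x distance.

    Mirrors the reformatted helper in vision_node.py: argsort the
    flattened x-channel, then unravel the winning flat index back
    into (row, col) coordinates of the H x W x 3 distance image.
    """
    min_x_sorted_indices = np.argsort(dist_array[:, :, 0], axis=None)
    x, y = np.unravel_index(min_x_sorted_indices[0], dist_array.shape[:2])
    return dist_array[x][y].copy()


# Invented 2x2 "distance image" with (x, y, z) per pixel, for illustration.
points = np.array(
    [
        [[5.0, 1.0, 0.2], [2.0, -1.0, 0.1]],
        [[9.0, 0.5, 0.3], [4.0, 2.0, 0.4]],
    ]
)
nearest = min_x(points)  # nearest point in x: (2.0, -1.0, 0.1)
```

Note that this sketch, like the original, does not mask out invalid (zero) lidar returns; the patched `predict_ultralytics` handles that separately by setting black pixels to `np.inf` before calling the helper.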
diff --git a/code/planning/setup.py b/code/planning/setup.py index dcc9fc08..d0f57539 100755 --- a/code/planning/setup.py +++ b/code/planning/setup.py @@ -2,6 +2,5 @@ from distutils.core import setup from catkin_pkg.python_setup import generate_distutils_setup -setup_args = generate_distutils_setup(packages=['planning'], - package_dir={'': 'src'}) +setup_args = generate_distutils_setup(packages=["planning"], package_dir={"": "src"}) setup(**setup_args) diff --git a/code/planning/src/behavior_agent/behavior_tree.py b/code/planning/src/behavior_agent/behavior_tree.py index eda141fe..48791e52 100755 --- a/code/planning/src/behavior_agent/behavior_tree.py +++ b/code/planning/src/behavior_agent/behavior_tree.py @@ -20,76 +20,108 @@ def grow_a_tree(role_name): rules = Parallel( "Rules", children=[ - Selector - ("Priorities", + Selector( + "Priorities", children=[ behaviours.maneuvers.UnstuckRoutine("Unstuck Routine"), - Selector("Road Features", - children=[ - behaviours.maneuvers.LeaveParkingSpace("Leave Parking Space"), - Sequence("Intersection", - children=[ - behaviours.road_features.IntersectionAhead - ("Intersection Ahead?"), - Sequence("Intersection Actions", - children=[ - behaviours.intersection.Approach - ("Approach Intersection"), - behaviours.intersection.Wait - ("Wait Intersection"), - behaviours.intersection.Enter - ("Enter Intersection"), - behaviours.intersection.Leave - ("Leave Intersection") - ]) - ]), - ]), - Selector("Laneswitching", children=[ - Sequence("Laneswitch", + Selector( + "Road Features", + children=[ + behaviours.maneuvers.LeaveParkingSpace( + "Leave Parking Space" + ), + Sequence( + "Intersection", children=[ - behaviours.road_features.LaneChangeAhead - ("Lane Change Ahead?"), - Sequence("Lane Change Actions", - children=[ - behaviours.lane_change.Approach - ("Approach Change"), - behaviours.lane_change.Wait - ("Wait Change"), - behaviours.lane_change.Enter - ("Enter Change"), - behaviours.lane_change.Leave - ("Leave Change") - ]) - ]), - 
Sequence("Overtaking", + behaviours.road_features.IntersectionAhead( + "Intersection Ahead?" + ), + Sequence( + "Intersection Actions", + children=[ + behaviours.intersection.Approach( + "Approach Intersection" + ), + behaviours.intersection.Wait( + "Wait Intersection" + ), + behaviours.intersection.Enter( + "Enter Intersection" + ), + behaviours.intersection.Leave( + "Leave Intersection" + ), + ], + ), + ], + ), + ], + ), + Selector( + "Laneswitching", + children=[ + Sequence( + "Laneswitch", children=[ - behaviours.road_features.OvertakeAhead - ("Overtake Ahead?"), - Sequence("Overtake Actions", - children=[ - behaviours.overtake.Approach - ("Approach Overtake"), - behaviours.overtake.Wait - ("Wait Overtake"), - behaviours.overtake.Enter - ("Enter Overtake"), - behaviours.overtake.Leave - ("Leave Overtake") - ]) - ]), - - ]), - behaviours.maneuvers.Cruise("Cruise") - ]) - ]) + behaviours.road_features.LaneChangeAhead( + "Lane Change Ahead?" + ), + Sequence( + "Lane Change Actions", + children=[ + behaviours.lane_change.Approach( + "Approach Change" + ), + behaviours.lane_change.Wait("Wait Change"), + behaviours.lane_change.Enter( + "Enter Change" + ), + behaviours.lane_change.Leave( + "Leave Change" + ), + ], + ), + ], + ), + Sequence( + "Overtaking", + children=[ + behaviours.road_features.OvertakeAhead( + "Overtake Ahead?" 
+ ), + Sequence( + "Overtake Actions", + children=[ + behaviours.overtake.Approach( + "Approach Overtake" + ), + behaviours.overtake.Wait("Wait Overtake"), + behaviours.overtake.Enter("Enter Overtake"), + behaviours.overtake.Leave("Leave Overtake"), + ], + ), + ], + ), + ], + ), + behaviours.maneuvers.Cruise("Cruise"), + ], + ) + ], + ) - metarules = Sequence("Meta", children=[behaviours.meta.Start("Start"), rules, - behaviours.meta.End("End")]) - root = Parallel("Root", children=[ - behaviours.topics2blackboard.create_node(role_name), - metarules, - Running("Idle") - ]) + metarules = Sequence( + "Meta", + children=[behaviours.meta.Start("Start"), rules, behaviours.meta.End("End")], + ) + root = Parallel( + "Root", + children=[ + behaviours.topics2blackboard.create_node(role_name), + metarules, + Running("Idle"), + ], + ) return root @@ -101,7 +133,7 @@ def main(): """ Entry point for the demo script. """ - rospy.init_node('behavior_tree', anonymous=True) + rospy.init_node("behavior_tree", anonymous=True) role_name = rospy.get_param("~role_name", "hero") root = grow_a_tree(role_name) behaviour_tree = py_trees_ros.trees.BehaviourTree(root) @@ -119,5 +151,6 @@ def main(): except rospy.ROSInterruptException: pass + if __name__ == "__main__": main() diff --git a/code/planning/src/behavior_agent/behaviours/behavior_speed.py b/code/planning/src/behavior_agent/behaviours/behavior_speed.py index 78edaf54..e5c93dea 100755 --- a/code/planning/src/behavior_agent/behaviours/behavior_speed.py +++ b/code/planning/src/behavior_agent/behaviours/behavior_speed.py @@ -1,4 +1,3 @@ - from collections import namedtuple diff --git a/code/planning/src/behavior_agent/behaviours/intersection.py b/code/planning/src/behavior_agent/behaviours/intersection.py index 48c2e357..6f2ce875 100755 --- a/code/planning/src/behavior_agent/behaviours/intersection.py +++ b/code/planning/src/behavior_agent/behaviours/intersection.py @@ -4,7 +4,7 @@ import rospy -from .import behavior_speed as bs +from . 
import behavior_speed as bs import planning # noqa: F401 @@ -34,6 +34,7 @@ class Approach(py_trees.behaviour.Behaviour): triggered. It than handles the approaching the intersection, slowing the vehicle down appropriately. """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -55,9 +56,9 @@ def setup(self, timeout): successful :return: True, as the set up is successful. """ - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", - String, queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -77,7 +78,7 @@ def initialise(self): self.traffic_light_detected = False self.traffic_light_distance = np.inf - self.traffic_light_status = '' + self.traffic_light_status = "" self.virtual_stopline_distance = np.inf @@ -100,10 +101,10 @@ def update(self): detected. """ # Update Light Info - light_status_msg = self.blackboard.get( - "/paf/hero/Center/traffic_light_state") + light_status_msg = self.blackboard.get("/paf/hero/Center/traffic_light_state") light_distance_y_msg = self.blackboard.get( - "/paf/hero/Center/traffic_light_y_distance") + "/paf/hero/Center/traffic_light_y_distance" + ) if light_status_msg is not None: self.traffic_light_status = get_color(light_status_msg.state) self.traffic_light_detected = True @@ -133,18 +134,19 @@ def update(self): target_distance = TARGET_DISTANCE_TO_STOP # stop when there is no or red/yellow traffic light or a stop sign is # detected - if self.traffic_light_status == '' \ - or self.traffic_light_status == 'red' \ - or self.traffic_light_status == 'yellow'\ - or (self.stop_sign_detected and - not self.traffic_light_detected): + if ( + self.traffic_light_status == "" + or self.traffic_light_status == "red" + or self.traffic_light_status == "yellow" + or (self.stop_sign_detected and not self.traffic_light_detected) + ): rospy.loginfo("slowing 
down!") self.curr_behavior_pub.publish(bs.int_app_to_stop.name) # approach slowly when traffic light is green as traffic lights are # higher priority than traffic signs this behavior is desired - if self.traffic_light_status == 'green': + if self.traffic_light_status == "green": self.curr_behavior_pub.publish(bs.int_app_green.name) # get speed @@ -154,30 +156,35 @@ def update(self): else: rospy.logwarn("no speedometer connected") return py_trees.common.Status.RUNNING - if (self.virtual_stopline_distance > target_distance) and \ - (self.traffic_light_distance > 150): + if (self.virtual_stopline_distance > target_distance) and ( + self.traffic_light_distance > 150 + ): # too far print("still approaching") return py_trees.common.Status.RUNNING - elif speed < convert_to_ms(2.0) and \ - ((self.virtual_stopline_distance < target_distance) or - (self.traffic_light_distance < 150)): + elif speed < convert_to_ms(2.0) and ( + (self.virtual_stopline_distance < target_distance) + or (self.traffic_light_distance < 150) + ): # stopped print("stopped") return py_trees.common.Status.SUCCESS - elif speed > convert_to_ms(5.0) and \ - self.virtual_stopline_distance < 6.0 and \ - self.traffic_light_status == "green": + elif ( + speed > convert_to_ms(5.0) + and self.virtual_stopline_distance < 6.0 + and self.traffic_light_status == "green" + ): # drive through intersection even if traffic light turns yellow return py_trees.common.Status.SUCCESS - elif speed > convert_to_ms(5.0) and \ - self.virtual_stopline_distance < 3.5: + elif speed > convert_to_ms(5.0) and self.virtual_stopline_distance < 3.5: # running over line return py_trees.common.Status.SUCCESS - if self.virtual_stopline_distance < target_distance and \ - not self.stopline_detected: + if ( + self.virtual_stopline_distance < target_distance + and not self.stopline_detected + ): rospy.loginfo("Leave intersection!") return py_trees.common.Status.SUCCESS else: @@ -194,9 +201,9 @@ def terminate(self, new_status): :param 
new_status: new state after this one is terminated """ self.logger.debug( - " %s [Foo::terminate().terminate()][%s->%s]" % (self.name, - self.status, - new_status)) + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class Wait(py_trees.behaviour.Behaviour): @@ -205,6 +212,7 @@ class Wait(py_trees.behaviour.Behaviour): section until there either is no traffic light, the traffic light is green or the intersection is clear. """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -225,9 +233,9 @@ def setup(self, timeout): successful :return: True, as the set up is successful. """ - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", String, - queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() self.red_light_flag = False self.green_light_time = None @@ -262,8 +270,7 @@ def update(self): py_trees.common.Status.SUCCESS, if the traffic light switched to green or no traffic light is detected """ - light_status_msg = self.blackboard.get( - "/paf/hero/Center/traffic_light_state") + light_status_msg = self.blackboard.get("/paf/hero/Center/traffic_light_state") # ADD FEATURE: Check if intersection is clear lidar_data = None @@ -277,34 +284,36 @@ def update(self): if light_status_msg is not None: traffic_light_status = get_color(light_status_msg.state) - if traffic_light_status == "red" or \ - traffic_light_status == "yellow": + if traffic_light_status == "red" or traffic_light_status == "yellow": # Wait at traffic light self.red_light_flag = True self.green_light_time = rospy.get_rostime() rospy.loginfo(f"Light Status: {traffic_light_status}") self.curr_behavior_pub.publish(bs.int_wait.name) return py_trees.common.Status.RUNNING - elif rospy.get_rostime() - self.green_light_time < \ - rospy.Duration(1)\ - and traffic_light_status == "green": + elif ( + 
rospy.get_rostime() - self.green_light_time < rospy.Duration(1) + and traffic_light_status == "green" + ): # Wait approx 1s for confirmation rospy.loginfo("Confirm green light!") return py_trees.common.Status.RUNNING elif self.red_light_flag and traffic_light_status != "green": - rospy.loginfo(f"Light Status: {traffic_light_status}" - "-> prev was red") + rospy.loginfo(f"Light Status: {traffic_light_status}" "-> prev was red") # Probably some interference return py_trees.common.Status.RUNNING - elif rospy.get_rostime() - self.green_light_time > \ - rospy.Duration(1)\ - and traffic_light_status == "green": + elif ( + rospy.get_rostime() - self.green_light_time > rospy.Duration(1) + and traffic_light_status == "green" + ): rospy.loginfo(f"Light Status: {traffic_light_status}") # Drive through intersection return py_trees.common.Status.SUCCESS else: - rospy.loginfo(f"Light Status: {traffic_light_status}" - "-> No Traffic Light detected") + rospy.loginfo( + f"Light Status: {traffic_light_status}" + "-> No Traffic Light detected" + ) # Check clear if no traffic light is detected if not intersection_clear: @@ -326,9 +335,9 @@ def terminate(self, new_status): :param new_status: new state after this one is terminated """ self.logger.debug( - " %s [Foo::terminate().terminate()][%s->%s]" % (self.name, - self.status, - new_status)) + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class Enter(py_trees.behaviour.Behaviour): @@ -337,6 +346,7 @@ class Enter(py_trees.behaviour.Behaviour): sets a speed and finishes if the ego vehicle is close to the end of the intersection. """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -357,9 +367,9 @@ def setup(self, timeout): successful :return: True, as the set up is successful. 
""" - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", String, - queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -416,9 +426,9 @@ def terminate(self, new_status): :param new_status: new state after this one is terminated """ self.logger.debug( - " %s [Foo::terminate().terminate()][%s->%s]" % (self.name, - self.status, - new_status)) + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class Leave(py_trees.behaviour.Behaviour): @@ -426,6 +436,7 @@ class Leave(py_trees.behaviour.Behaviour): This behaviour defines the leaf of this subtree, if this behavior is reached, the vehicle left the intersection. """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -446,9 +457,9 @@ def setup(self, timeout): successful :return: True, as the set up is successful. """ - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", String, - queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -489,6 +500,6 @@ def terminate(self, new_status): :param new_status: new state after this one is terminated """ self.logger.debug( - " %s [Foo::terminate().terminate()][%s->%s]" % (self.name, - self.status, - new_status)) + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) diff --git a/code/planning/src/behavior_agent/behaviours/lane_change.py b/code/planning/src/behavior_agent/behaviours/lane_change.py index 9a6655b8..cf63419d 100755 --- a/code/planning/src/behavior_agent/behaviours/lane_change.py +++ b/code/planning/src/behavior_agent/behaviours/lane_change.py @@ -19,6 +19,7 @@ class Approach(py_trees.behaviour.Behaviour): triggered. 
It then handles approaching the
     lane change, slowing the vehicle down appropriately.
     """
+
     def __init__(self, name):
         """
         Minimal one-time initialisation. Other one-time initialisation
@@ -40,9 +41,9 @@ def setup(self, timeout):
         successful
         :return: True, as the set up is successful.
         """
-        self.curr_behavior_pub = rospy.Publisher("/paf/hero/"
-                                                 "curr_behavior",
-                                                 String, queue_size=1)
+        self.curr_behavior_pub = rospy.Publisher(
+            "/paf/hero/" "curr_behavior", String, queue_size=1
+        )
         self.blackboard = py_trees.blackboard.Blackboard()
         return True
 
@@ -116,13 +117,19 @@ def update(self):
             rospy.loginfo("still approaching")
             self.curr_behavior_pub.publish(bs.lc_app_blocked.name)
             return py_trees.common.Status.RUNNING
-        elif speed < convert_to_ms(2.0) and \
-                self.virtual_change_distance < target_dis and self.blocked:
+        elif (
+            speed < convert_to_ms(2.0)
+            and self.virtual_change_distance < target_dis
+            and self.blocked
+        ):
             # stopped
             rospy.loginfo("stopped")
             return py_trees.common.Status.SUCCESS
-        elif speed > convert_to_ms(5.0) and \
-                self.virtual_change_distance < 3.5 and not self.blocked:
+        elif (
+            speed > convert_to_ms(5.0)
+            and self.virtual_change_distance < 3.5
+            and not self.blocked
+        ):
             # running over line
             return py_trees.common.Status.SUCCESS
         else:
@@ -139,15 +146,16 @@ def terminate(self, new_status):
         :param new_status: new state after this one is terminated
         """
         self.logger.debug(
-            " %s [Foo::terminate().terminate()][%s->%s]" % (self.name,
-                                                            self.status,
-                                                            new_status))
+            " %s [Foo::terminate().terminate()][%s->%s]"
+            % (self.name, self.status, new_status)
+        )
 
 
 class Wait(py_trees.behaviour.Behaviour):
     """
     This behavior handles the waiting in front of the lane change.
     """
+
     def __init__(self, name):
         """
         Minimal one-time initialisation. Other one-time initialisation
@@ -168,9 +176,9 @@ def setup(self, timeout):
         successful
         :return: True, as the set up is successful.
""" - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", String, - queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -241,9 +249,9 @@ def terminate(self, new_status): :param new_status: new state after this one is terminated """ self.logger.debug( - " %s [Foo::terminate().terminate()][%s->%s]" % (self.name, - self.status, - new_status)) + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class Enter(py_trees.behaviour.Behaviour): @@ -252,6 +260,7 @@ class Enter(py_trees.behaviour.Behaviour): sets a speed and finishes if the ego vehicle is close to the end of the intersection. """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -272,9 +281,9 @@ def setup(self, timeout): successful :return: True, as the set up is successful. """ - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", String, - queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -307,8 +316,7 @@ def update(self): py_trees.common.Status.FAILURE, if no next path point can be detected. 
""" - next_waypoint_msg = self.blackboard.\ - get("/paf/hero/lane_change_distance") + next_waypoint_msg = self.blackboard.get("/paf/hero/lane_change_distance") if next_waypoint_msg is None: return py_trees.common.Status.FAILURE @@ -329,9 +337,9 @@ def terminate(self, new_status): :param new_status: new state after this one is terminated """ self.logger.debug( - " %s [Foo::terminate().terminate()][%s->%s]" % (self.name, - self.status, - new_status)) + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class Leave(py_trees.behaviour.Behaviour): @@ -339,6 +347,7 @@ class Leave(py_trees.behaviour.Behaviour): This behaviour defines the leaf of this subtree, if this behavior is reached, the vehicle left the intersection. """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -359,9 +368,9 @@ def setup(self, timeout): successful :return: True, as the set up is successful. """ - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", String, - queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -404,6 +413,6 @@ def terminate(self, new_status): :param new_status: new state after this one is terminated """ self.logger.debug( - " %s [Foo::terminate().terminate()][%s->%s]" % (self.name, - self.status, - new_status)) + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) diff --git a/code/planning/src/behavior_agent/behaviours/maneuvers.py b/code/planning/src/behavior_agent/behaviours/maneuvers.py index 2a37f7a1..913fa0c0 100755 --- a/code/planning/src/behavior_agent/behaviours/maneuvers.py +++ b/code/planning/src/behavior_agent/behaviours/maneuvers.py @@ -3,6 +3,7 @@ from std_msgs.msg import String, Float32, Bool import numpy as np from . 
import behavior_speed as bs + # from behavior_agent.msg import BehaviorSpeed """ @@ -15,6 +16,7 @@ class LeaveParkingSpace(py_trees.behaviour.Behaviour): This behavior is triggered in the beginning when the vehicle needs to leave the parking space. """ + def __init__(self, name): """ Minimal one-time initialisation. A good rule of thumb is to only @@ -40,9 +42,9 @@ def setup(self, timeout): successful :return: True, as there is nothing to set up. """ - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", - String, queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() self.initPosition = None return True @@ -90,13 +92,20 @@ def update(self): speed = self.blackboard.get("/carla/hero/Speed") if self.called is False: # calculate distance between start and current position - if position is not None and \ - self.initPosition is not None and \ - speed is not None: - startPos = np.array([position.pose.position.x, - position.pose.position.y]) - endPos = np.array([self.initPosition.pose.position.x, - self.initPosition.pose.position.y]) + if ( + position is not None + and self.initPosition is not None + and speed is not None + ): + startPos = np.array( + [position.pose.position.x, position.pose.position.y] + ) + endPos = np.array( + [ + self.initPosition.pose.position.x, + self.initPosition.pose.position.y, + ] + ) distance = np.linalg.norm(startPos - endPos) if distance < 1 or speed.speed < 2: self.curr_behavior_pub.publish(bs.parking.name) @@ -121,8 +130,10 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class SwitchLaneLeft(py_trees.behaviour.Behaviour): @@ -131,6 
+142,7 @@ class SwitchLaneLeft(py_trees.behaviour.Behaviour): switch to the lane to the left. A check if the lane is free might be added in the future. """ + def __init__(self, name): """ Minimal one-time initialisation. A good rule of thumb is to only @@ -205,8 +217,10 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class SwitchLaneRight(py_trees.behaviour.Behaviour): @@ -215,6 +229,7 @@ class SwitchLaneRight(py_trees.behaviour.Behaviour): switch to the lane to the right. A check if the lane is free might be added in the future. """ + def __init__(self, name): """ Minimal one-time initialisation. A good rule of thumb is to only @@ -281,8 +296,10 @@ def update(self): return py_trees.common.Status.RUNNING def terminate(self, new_status): - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class Cruise(py_trees.behaviour.Behaviour): @@ -296,6 +313,7 @@ class Cruise(py_trees.behaviour.Behaviour): speed control = acting via speed limits and target_speed following the trajectory = acting """ + def __init__(self, name): """ Minimal one-time initialisation. A good rule of thumb is to only @@ -320,9 +338,9 @@ def setup(self, timeout): :return: True, as there is nothing to set up. 
""" - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", - String, queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -369,8 +387,10 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) def get_distance(pos_1, pos_2): @@ -402,7 +422,6 @@ def pos_to_np_array(pos): class UnstuckRoutine(py_trees.behaviour.Behaviour): - """ Documentation to this behavior can be found in /doc/planning/Behavior_detailed.md @@ -411,6 +430,7 @@ class UnstuckRoutine(py_trees.behaviour.Behaviour): unstuck. The behavior will then try to reverse and steer to the left or right to get out of the stuck situation. 
""" + def reset_stuck_values(self): self.unstuck_overtake_count = 0 self.stuck_timer = rospy.Time.now() @@ -421,16 +441,20 @@ def print_warnings(self): self.last_stuck_duration_log = self.stuck_duration self.last_wait_stuck_duration_log = self.wait_stuck_duration - stuck_duration_diff = (self.stuck_duration - - self.last_stuck_duration_log) - wait_stuck_duration_diff = (self.wait_stuck_duration - - self.last_wait_stuck_duration_log) + stuck_duration_diff = self.stuck_duration - self.last_stuck_duration_log + wait_stuck_duration_diff = ( + self.wait_stuck_duration - self.last_wait_stuck_duration_log + ) - if self.stuck_duration.secs > TRIGGER_STUCK_DURATION.secs/2 \ - and stuck_duration_diff.secs >= 1: + if ( + self.stuck_duration.secs > TRIGGER_STUCK_DURATION.secs / 2 + and stuck_duration_diff.secs >= 1 + ): rospy.logwarn(f"Stuck for {self.stuck_duration.secs} s") - if self.wait_stuck_duration.secs > TRIGGER_WAIT_STUCK_DURATION.secs/2\ - and wait_stuck_duration_diff.secs >= 1: + if ( + self.wait_stuck_duration.secs > TRIGGER_WAIT_STUCK_DURATION.secs / 2 + and wait_stuck_duration_diff.secs >= 1 + ): rospy.logwarn(f"Wait stuck for {self.wait_stuck_duration.secs} s") def __init__(self, name): @@ -469,15 +493,15 @@ def setup(self, timeout): successful :return: True, as there is nothing to set up. 
""" - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", - String, queue_size=1) - self.pub_unstuck_distance = rospy.Publisher("/paf/hero/" - "unstuck_distance", - Float32, queue_size=1) - self.pub_unstuck_flag = rospy.Publisher("/paf/hero/" - "unstuck_flag", - Bool, queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) + self.pub_unstuck_distance = rospy.Publisher( + "/paf/hero/" "unstuck_distance", Float32, queue_size=1 + ) + self.pub_unstuck_flag = rospy.Publisher( + "/paf/hero/" "unstuck_flag", Bool, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -527,18 +551,24 @@ def initialise(self): # print fatal error if stuck for too long if self.stuck_duration >= TRIGGER_STUCK_DURATION: - rospy.logfatal(f"""Should be Driving but Stuck in one place + rospy.logfatal( + f"""Should be Driving but Stuck in one place for more than {TRIGGER_STUCK_DURATION.secs}\n - -> starting unstuck routine""") + -> starting unstuck routine""" + ) self.init_pos = pos_to_np_array( - self.blackboard.get("/paf/hero/current_pos")) + self.blackboard.get("/paf/hero/current_pos") + ) elif self.wait_stuck_duration >= TRIGGER_WAIT_STUCK_DURATION: - rospy.logfatal(f"""Wait Stuck in one place + rospy.logfatal( + f"""Wait Stuck in one place for more than {TRIGGER_WAIT_STUCK_DURATION.secs} \n - -> starting unstuck routine""") + -> starting unstuck routine""" + ) self.init_pos = pos_to_np_array( - self.blackboard.get("/paf/hero/current_pos")) + self.blackboard.get("/paf/hero/current_pos") + ) return True @@ -563,15 +593,16 @@ def update(self): # self.stuck_timer = rospy.Time.now() # self.wait_stuck_timer = rospy.Time.now() - self.current_pos = pos_to_np_array( - self.blackboard.get("/paf/hero/current_pos")) + self.current_pos = pos_to_np_array(self.blackboard.get("/paf/hero/current_pos")) self.current_speed = self.blackboard.get("/carla/hero/Speed") if self.init_pos is None or 
self.current_pos is None: return py_trees.common.Status.FAILURE # if no stuck detected, return failure - if self.stuck_duration < TRIGGER_STUCK_DURATION and \ - self.wait_stuck_duration < TRIGGER_WAIT_STUCK_DURATION: + if ( + self.stuck_duration < TRIGGER_STUCK_DURATION + and self.wait_stuck_duration < TRIGGER_WAIT_STUCK_DURATION + ): # rospy.logfatal("No stuck detected.") self.pub_unstuck_flag.publish(False) # unstuck distance -1 is set, to reset the unstuck distance @@ -579,7 +610,7 @@ def update(self): return py_trees.common.Status.FAILURE # stuck detected -> unstuck routine - if rospy.Time.now()-self.init_ros_stuck_time < UNSTUCK_DRIVE_DURATION: + if rospy.Time.now() - self.init_ros_stuck_time < UNSTUCK_DRIVE_DURATION: self.curr_behavior_pub.publish(bs.us_unstuck.name) self.pub_unstuck_flag.publish(True) rospy.logfatal("Unstuck routine running.") @@ -590,21 +621,20 @@ def update(self): self.curr_behavior_pub.publish(bs.us_stop.name) return py_trees.common.Status.RUNNING # vehicle has stopped: - unstuck_distance = get_distance(self.init_pos, - self.current_pos) + unstuck_distance = get_distance(self.init_pos, self.current_pos) self.pub_unstuck_distance.publish(unstuck_distance) # check if vehicle needs to overtake: # save current pos to last_unstuck_positions - self.last_unstuck_positions = np.roll(self.last_unstuck_positions, - -1, axis=0) + self.last_unstuck_positions = np.roll( + self.last_unstuck_positions, -1, axis=0 + ) self.last_unstuck_positions[-1] = self.init_pos # if last unstuck was too far away, no overtake # we only want to overtake when we tried to unstuck twice # this case is the first time ever we tried to unstuck - if np.array_equal(self.last_unstuck_positions[0], - np.array([0, 0])): + if np.array_equal(self.last_unstuck_positions[0], np.array([0, 0])): self.reset_stuck_values() rospy.logwarn("Unstuck routine finished.") return py_trees.common.Status.FAILURE @@ -614,9 +644,12 @@ def update(self): # if the distance between the last and the 
first unstuck position # is too far, we don't want to overtake, since its the first # unstuck routine at this position on the map - if get_distance(self.last_unstuck_positions[0], - self.last_unstuck_positions[-1])\ - > UNSTUCK_CLEAR_DISTANCE: + if ( + get_distance( + self.last_unstuck_positions[0], self.last_unstuck_positions[-1] + ) + > UNSTUCK_CLEAR_DISTANCE + ): self.reset_stuck_values() rospy.logwarn("Unstuck routine finished.") return py_trees.common.Status.FAILURE @@ -646,5 +679,7 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) diff --git a/code/planning/src/behavior_agent/behaviours/meta.py b/code/planning/src/behavior_agent/behaviours/meta.py index c3921d1a..2461495c 100755 --- a/code/planning/src/behavior_agent/behaviours/meta.py +++ b/code/planning/src/behavior_agent/behaviours/meta.py @@ -14,6 +14,7 @@ class Start(py_trees.behaviour.Behaviour): This behavior is the first one being called when the decision tree starts, it sets a first target_speed """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -35,9 +36,9 @@ def setup(self, timeout): :return: True, as the set up is successful. 
""" self.blackboard = py_trees.blackboard.Blackboard() - self.target_speed_pub = rospy.Publisher("paf/hero/" - "max_velocity", - Float32, queue_size=1) + self.target_speed_pub = rospy.Publisher( + "paf/hero/" "max_velocity", Float32, queue_size=1 + ) return True def initialise(self): @@ -76,14 +77,17 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates :param new_status: new state after this one is terminated """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class End(py_trees.behaviour.Behaviour): """ This behavior is called as the last one when the agent finished the path. """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -105,9 +109,9 @@ def setup(self, timeout): :return: True, as the set up is successful. """ self.blackboard = py_trees.blackboard.Blackboard() - self.target_speed_pub = rospy.Publisher("/paf/hero/" - "max_velocity", - Float32, queue_size=1) + self.target_speed_pub = rospy.Publisher( + "/paf/hero/" "max_velocity", Float32, queue_size=1 + ) return True def initialise(self): @@ -150,5 +154,7 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates :param new_status: new state after this one is terminated """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) diff --git a/code/planning/src/behavior_agent/behaviours/overtake.py b/code/planning/src/behavior_agent/behaviours/overtake.py index 5b1bb7b4..affcbc61 100644 --- a/code/planning/src/behavior_agent/behaviours/overtake.py +++ b/code/planning/src/behavior_agent/behaviours/overtake.py @@ -6,8 +6,7 @@ from . 
import behavior_speed as bs
 import planning  # noqa: F401
-from local_planner.utils import NUM_WAYPOINTS, TARGET_DISTANCE_TO_STOP, \
-    convert_to_ms
+from local_planner.utils import NUM_WAYPOINTS, TARGET_DISTANCE_TO_STOP, convert_to_ms
 
 """
 Source: https://github.com/ll7/psaf2
@@ -25,6 +24,7 @@ class Approach(py_trees.behaviour.Behaviour):
     behaviours.road_features.overtake_ahead is triggered.
     It then handles the procedure for overtaking.
     """
+
     def __init__(self, name):
         """
         Minimal one-time initialisation. Other one-time initialisation
@@ -46,9 +46,9 @@ def setup(self, timeout):
         successful
         :return: True, as the set up is successful.
         """
-        self.curr_behavior_pub = rospy.Publisher("/paf/hero/"
-                                                 "curr_behavior",
-                                                 String, queue_size=1)
+        self.curr_behavior_pub = rospy.Publisher(
+            "/paf/hero/" "curr_behavior", String, queue_size=1
+        )
         self.blackboard = py_trees.blackboard.Blackboard()
         return True
 
@@ -103,8 +103,10 @@ def update(self):
             else:
                 distance_oncoming = 35
 
-            if distance_oncoming is not None and \
-                    distance_oncoming > self.clear_distance:
+            if (
+                distance_oncoming is not None
+                and distance_oncoming > self.clear_distance
+            ):
                 rospy.loginfo("Overtake is free not slowing down!")
                 self.curr_behavior_pub.publish(bs.ot_app_free.name)
                 return py_trees.common.Status.SUCCESS
@@ -124,8 +126,7 @@ def update(self):
             # too far
             rospy.loginfo("still approaching")
             return py_trees.common.Status.RUNNING
-        elif speed < convert_to_ms(2.0) and \
-                self.ot_distance < TARGET_DISTANCE_TO_STOP:
+        elif speed < convert_to_ms(2.0) and self.ot_distance < TARGET_DISTANCE_TO_STOP:
             # stopped
             rospy.loginfo("stopped")
             return py_trees.common.Status.SUCCESS
@@ -144,9 +145,9 @@ def terminate(self, new_status):
         :param new_status: new state after this one is terminated
         """
         self.logger.debug(
-            " %s [Foo::terminate().terminate()][%s->%s]" % (self.name,
-                                                            self.status,
-                                                            new_status))
+            " %s [Foo::terminate().terminate()][%s->%s]"
+            % (self.name, self.status, new_status)
+        )
 
 
 class
Wait(py_trees.behaviour.Behaviour): @@ -155,6 +156,7 @@ class Wait(py_trees.behaviour.Behaviour): which is blocking the road. The Ego vehicle is waiting to get a clear path for overtaking. """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -175,9 +177,9 @@ def setup(self, timeout): successful :return: True, as the set up is successful. """ - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", String, - queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -251,9 +253,9 @@ def terminate(self, new_status): :param new_status: new state after this one is terminated """ self.logger.debug( - " %s [Foo::terminate().terminate()][%s->%s]" % (self.name, - self.status, - new_status)) + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class Enter(py_trees.behaviour.Behaviour): @@ -261,6 +263,7 @@ class Enter(py_trees.behaviour.Behaviour): This behavior handles the switching to a new lane in the overtaking procedure. """ + def __init__(self, name): """ Minimal one-time initialisation. Other one-time initialisation @@ -281,9 +284,9 @@ def setup(self, timeout): successful :return: True, as the set up is successful. 
"""
-        self.curr_behavior_pub = rospy.Publisher("/paf/hero/"
-                                                 "curr_behavior", String,
-                                                 queue_size=1)
+        self.curr_behavior_pub = rospy.Publisher(
+            "/paf/hero/" "curr_behavior", String, queue_size=1
+        )
         self.blackboard = py_trees.blackboard.Blackboard()
         return True
 
@@ -342,9 +345,9 @@ def terminate(self, new_status):
         :param new_status: new state after this one is terminated
         """
         self.logger.debug(
-            " %s [Foo::terminate().terminate()][%s->%s]" % (self.name,
-                                                            self.status,
-                                                            new_status))
+            " %s [Foo::terminate().terminate()][%s->%s]"
+            % (self.name, self.status, new_status)
+        )
 
 
 class Leave(py_trees.behaviour.Behaviour):
@@ -352,6 +355,7 @@ class Leave(py_trees.behaviour.Behaviour):
     This behaviour defines the leaf of this subtree, if this behavior is
     reached, the vehicle performed the overtake.
     """
+
     def __init__(self, name):
         """
         Minimal one-time initialisation. Other one-time initialisation
@@ -372,9 +376,9 @@ def setup(self, timeout):
         successful
         :return: True, as the set up is successful.
""" - self.curr_behavior_pub = rospy.Publisher("/paf/hero/" - "curr_behavior", String, - queue_size=1) + self.curr_behavior_pub = rospy.Publisher( + "/paf/hero/" "curr_behavior", String, queue_size=1 + ) self.blackboard = py_trees.blackboard.Blackboard() return True @@ -389,8 +393,7 @@ def initialise(self): """ self.curr_behavior_pub.publish(bs.ot_leave.name) data = self.blackboard.get("/paf/hero/current_pos") - self.first_pos = np.array([data.pose.position.x, - data.pose.position.y]) + self.first_pos = np.array([data.pose.position.x, data.pose.position.y]) rospy.loginfo(f"Leave Overtake: {self.first_pos}") return True @@ -407,8 +410,7 @@ def update(self): """ global OVERTAKE_EXECUTING data = self.blackboard.get("/paf/hero/current_pos") - self.current_pos = np.array([data.pose.position.x, - data.pose.position.y]) + self.current_pos = np.array([data.pose.position.x, data.pose.position.y]) distance = np.linalg.norm(self.first_pos - self.current_pos) if distance > OVERTAKE_EXECUTING + NUM_WAYPOINTS: rospy.loginfo(f"Left Overtake: {self.current_pos}") @@ -427,6 +429,6 @@ def terminate(self, new_status): :param new_status: new state after this one is terminated """ self.logger.debug( - " %s [Foo::terminate().terminate()][%s->%s]" % (self.name, - self.status, - new_status)) + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) diff --git a/code/planning/src/behavior_agent/behaviours/road_features.py b/code/planning/src/behavior_agent/behaviours/road_features.py index 67a70a36..13f00caf 100755 --- a/code/planning/src/behavior_agent/behaviours/road_features.py +++ b/code/planning/src/behavior_agent/behaviours/road_features.py @@ -17,6 +17,7 @@ class IntersectionAhead(py_trees.behaviour.Behaviour): ego vehicle or not and triggers the rest of the decision tree handling the intersection. """ + def __init__(self, name): """ Minimal one-time initialisation. 
A good rule of thumb is to only @@ -91,8 +92,10 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates :param new_status: new state after this one is terminated """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class LaneChangeAhead(py_trees.behaviour.Behaviour): @@ -101,6 +104,7 @@ class LaneChangeAhead(py_trees.behaviour.Behaviour): ego vehicle or not and triggers the rest of the decision tree handling the lane change. """ + def __init__(self, name): """ Minimal one-time initialisation. A good rule of thumb is to only @@ -174,8 +178,10 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates :param new_status: new state after this one is terminated """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class OvertakeAhead(py_trees.behaviour.Behaviour): @@ -183,6 +189,7 @@ class OvertakeAhead(py_trees.behaviour.Behaviour): This behaviour checks whether an object that needs to be overtaken is ahead """ + def __init__(self, name): """ Minimal one-time initialisation. 
A good rule of thumb is to only @@ -241,13 +248,13 @@ def update(self): current_position = self.blackboard.get("/paf/hero/current_pos") current_heading = self.blackboard.get("/paf/hero/current_heading").data - if obstacle_msg is None or \ - current_position is None or \ - current_heading is None: + if obstacle_msg is None or current_position is None or current_heading is None: return py_trees.common.Status.FAILURE - current_position = [current_position.pose.position.x, - current_position.pose.position.y, - current_position.pose.position.z] + current_position = [ + current_position.pose.position.x, + current_position.pose.position.y, + current_position.pose.position.z, + ] obstacle_distance = obstacle_msg.data[0] obstacle_speed = obstacle_msg.data[1] @@ -255,11 +262,12 @@ def update(self): if obstacle_distance == np.Inf: return py_trees.common.Status.FAILURE # calculate approx collision position in global coords - rotation_matrix = Rotation.from_euler('z', current_heading) + rotation_matrix = Rotation.from_euler("z", current_heading) # Apply current heading to absolute distance vector # and add to current position pos_moved_in_x_direction = current_position + rotation_matrix.apply( - np.array([obstacle_distance, 0, 0])) + np.array([obstacle_distance, 0, 0]) + ) if np.linalg.norm(pos_moved_in_x_direction - current_position) < 1: # current collision is not near trajectory lane @@ -285,8 +293,10 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates :param new_status: new state after this one is terminated """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class MultiLane(py_trees.behaviour.Behaviour): @@ -295,6 +305,7 @@ class MultiLane(py_trees.behaviour.Behaviour): one lane in the driving direction. 
This could be used to change lanes to the
     right to perhaps evade an emergency vehicle.
     """
+
     def __init__(self, name):
         """
         Minimal one-time initialisation. A good rule of thumb is to only
@@ -367,8 +378,10 @@ def terminate(self, new_status):
         down
         writes a status message to the console when the behaviour terminates
         """
-        self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" %
-                          (self.name, self.status, new_status))
+        self.logger.debug(
+            " %s [Foo::terminate().terminate()][%s->%s]"
+            % (self.name, self.status, new_status)
+        )
 
 
 class SingleLineDotted(py_trees.behaviour.Behaviour):
@@ -376,6 +389,7 @@ class SingleLineDotted(py_trees.behaviour.Behaviour):
     This behavior checks if it is allowed to switch lanes on a
     single lane street.
     """
+
     def __init__(self, name):
         """
         Minimal one-time initialisation. A good rule of thumb is to only
@@ -445,8 +459,10 @@ def terminate(self, new_status):
         down
         writes a status message to the console when the behaviour terminates
         """
-        self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" %
-                          (self.name, self.status, new_status))
+        self.logger.debug(
+            " %s [Foo::terminate().terminate()][%s->%s]"
+            % (self.name, self.status, new_status)
+        )
 
 
 class RightLaneAvailable(py_trees.behaviour.Behaviour):
@@ -454,6 +470,7 @@ class RightLaneAvailable(py_trees.behaviour.Behaviour):
     This behavior checks if there is a lane to the right of the agent it
     could change to.
     """
+
     def __init__(self, name):
         """
         Minimal one-time initialisation.
A good rule of thumb is to only @@ -522,8 +539,10 @@ def terminate(self, new_status): down writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class LeftLaneAvailable(py_trees.behaviour.Behaviour): @@ -531,6 +550,7 @@ class LeftLaneAvailable(py_trees.behaviour.Behaviour): On a multi-lane, this behavior checks if there is a lane to the left of the agent it could change to, to overtake. """ + def __init__(self, name): """ Minimal one-time initialisation. A good rule of thumb is to only @@ -602,5 +622,7 @@ def terminate(self, new_status): down writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s ]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s ]" + % (self.name, self.status, new_status) + ) diff --git a/code/planning/src/behavior_agent/behaviours/topics2blackboard.py b/code/planning/src/behavior_agent/behaviours/topics2blackboard.py index 46616694..5f80061b 100755 --- a/code/planning/src/behavior_agent/behaviours/topics2blackboard.py +++ b/code/planning/src/behavior_agent/behaviours/topics2blackboard.py @@ -24,50 +24,92 @@ def create_node(role_name): :return: topics2blackboard the subtree of the topics in the blackboard """ topics = [ - {'name': f"/carla/{role_name}/Speed", 'msg': CarlaSpeedometer, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': f"/paf/{role_name}/slowed_by_car_in_front", 'msg': Bool, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': f"/paf/{role_name}/waypoint_distance", 'msg': Waypoint, - 'clearing-policy': py_trees.common.ClearingPolicy.ON_INITIALISE}, - {'name': f"/paf/{role_name}/stop_sign", 'msg': Stop_sign, - 
'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': - f"/paf/{role_name}/Center/traffic_light_state", - 'msg': TrafficLightState, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': - f"/paf/{role_name}/Center/traffic_light_y_distance", - 'msg': Int16, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': f"/paf/{role_name}/max_velocity", 'msg': Float32, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': f"/paf/{role_name}/speed_limit", 'msg': Float32, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': f"/paf/{role_name}/lane_change_distance", 'msg': LaneChange, - 'clearing-policy': py_trees.common.ClearingPolicy.ON_INITIALISE}, - {'name': f"/paf/{role_name}/collision", 'msg': Float32MultiArray, - 'clearing-policy': py_trees.common.ClearingPolicy.ON_INITIALISE}, - {'name': f"/paf/{role_name}/current_pos", 'msg': PoseStamped, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': f"/paf/{role_name}/current_heading", 'msg': Float32, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': f"/paf/{role_name}/overtake_success", 'msg': Float32, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': f"/paf/{role_name}/oncoming", 'msg': Float32, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER}, - {'name': f"/paf/{role_name}/target_velocity", 'msg': Float32, - 'clearing-policy': py_trees.common.ClearingPolicy.NEVER} + { + "name": f"/carla/{role_name}/Speed", + "msg": CarlaSpeedometer, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/slowed_by_car_in_front", + "msg": Bool, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/waypoint_distance", + "msg": Waypoint, + "clearing-policy": py_trees.common.ClearingPolicy.ON_INITIALISE, + }, + { + "name": f"/paf/{role_name}/stop_sign", + "msg": Stop_sign, + "clearing-policy": 
py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/Center/traffic_light_state", + "msg": TrafficLightState, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/Center/traffic_light_y_distance", + "msg": Int16, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/max_velocity", + "msg": Float32, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/speed_limit", + "msg": Float32, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/lane_change_distance", + "msg": LaneChange, + "clearing-policy": py_trees.common.ClearingPolicy.ON_INITIALISE, + }, + { + "name": f"/paf/{role_name}/collision", + "msg": Float32MultiArray, + "clearing-policy": py_trees.common.ClearingPolicy.ON_INITIALISE, + }, + { + "name": f"/paf/{role_name}/current_pos", + "msg": PoseStamped, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/current_heading", + "msg": Float32, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/overtake_success", + "msg": Float32, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/oncoming", + "msg": Float32, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, + { + "name": f"/paf/{role_name}/target_velocity", + "msg": Float32, + "clearing-policy": py_trees.common.ClearingPolicy.NEVER, + }, ] topics2blackboard = py_trees.composites.Parallel("Topics to Blackboard") for topic in topics: topics2blackboard.add_child( - py_trees_ros. - subscribers. 
- ToBlackboard(name=topic['name'], - topic_name=topic['name'], - topic_type=topic['msg'], - blackboard_variables={topic['name']: None}, - clearing_policy=topic['clearing-policy'])) + py_trees_ros.subscribers.ToBlackboard( + name=topic["name"], + topic_name=topic["name"], + topic_type=topic["msg"], + blackboard_variables={topic["name"]: None}, + clearing_policy=topic["clearing-policy"], + ) + ) return topics2blackboard diff --git a/code/planning/src/behavior_agent/behaviours/traffic_objects.py b/code/planning/src/behavior_agent/behaviours/traffic_objects.py index 547ebce9..d15efb64 100755 --- a/code/planning/src/behavior_agent/behaviours/traffic_objects.py +++ b/code/planning/src/behavior_agent/behaviours/traffic_objects.py @@ -14,6 +14,7 @@ class NotSlowedByCarInFront(py_trees.behaviour.Behaviour): More cases could be added later on. This behavior should be triggered by the perception. """ + def __init__(self, name): """ Minimal one-time initialisation. A good rule of thumb is to only @@ -87,14 +88,17 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class WaitLeftLaneFree(py_trees.behaviour.Behaviour): """ This behavior checks if it is safe to change to the lane on the left. """ + def __init__(self, name): """ Minimal one-time initialisation. 
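The `topics` list reformatted above is a config-driven registration pattern: each dict fully describes one subscription, and a single loop turns the configs into subscriber nodes. A self-contained sketch of the same idea, where `ClearingPolicy`, `StubSubscriber`, and `make_topics` are illustrative stand-ins (not the real `py_trees` / `py_trees_ros` classes):

```python
from enum import Enum, auto


class ClearingPolicy(Enum):
    # Stand-ins for the py_trees.common.ClearingPolicy members used above.
    NEVER = auto()
    ON_INITIALISE = auto()


def make_topics(role_name):
    # Same f-string naming scheme as the patch, trimmed to two entries.
    return [
        {"name": f"/carla/{role_name}/Speed",
         "clearing-policy": ClearingPolicy.NEVER},
        {"name": f"/paf/{role_name}/waypoint_distance",
         "clearing-policy": ClearingPolicy.ON_INITIALISE},
    ]


class StubSubscriber:
    # Minimal stand-in for py_trees_ros.subscribers.ToBlackboard.
    def __init__(self, name, clearing_policy):
        self.name = name
        self.clearing_policy = clearing_policy


# Mirrors the loop in create_node(): one subscriber per config entry.
subscribers = [
    StubSubscriber(name=t["name"], clearing_policy=t["clearing-policy"])
    for t in make_topics("hero")
]
```

Keeping the topic table declarative like this means adding a blackboard entry is a one-line change rather than another hand-written subscriber.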
A good rule of thumb is to only @@ -173,14 +177,17 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class WaitRightLaneFree(py_trees.behaviour.Behaviour): """ This behavior checks if it is safe to change to the lane on the left. """ + def __init__(self, name): """ Minimal one-time initialisation. A good rule of thumb is to only @@ -257,8 +264,10 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class NotSlowedByCarInFrontRight(py_trees.behaviour.Behaviour): @@ -266,6 +275,7 @@ class NotSlowedByCarInFrontRight(py_trees.behaviour.Behaviour): Checks if there is a car on the lane to the right that would slow the agent down. """ + def __init__(self, name): """ Minimal one-time initialisation. A good rule of thumb is to only @@ -336,14 +346,17 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) class OvertakingPossible(py_trees.behaviour.Behaviour): """ Checks if the overtaking is possible. """ + def __init__(self, name): """ Minimal one-time initialisation. 
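One readability artefact of this reformat worth flagging: Black joins previously wrapped log strings into pairs of adjacent literals on one line, e.g. `"PrePlanner: current agent-pose doesnt match the " "given global route"` later in this patch. Adjacent string literals are concatenated at compile time, so the result is a single message, but the leftover split is easy to misread as a tuple or a typo. A tiny sketch of the semantics:

```python
# Adjacent string literals are joined at compile time; the split below is
# exactly the shape Black leaves behind after unwrapping a wrapped log call.
warning = (
    "PrePlanner: current agent-pose doesnt match the " "given global route"
)

# A single literal would be the equivalent, easier-to-read form.
single = "PrePlanner: current agent-pose doesnt match the given global route"
```

Collapsing such pairs into one literal in a follow-up pass would make the reformatted log calls easier to grep.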
A good rule of thumb is to only @@ -414,5 +427,7 @@ def terminate(self, new_status): writes a status message to the console when the behaviour terminates """ - self.logger.debug(" %s [Foo::terminate().terminate()][%s->%s]" % - (self.name, self.status, new_status)) + self.logger.debug( + " %s [Foo::terminate().terminate()][%s->%s]" + % (self.name, self.status, new_status) + ) diff --git a/code/planning/src/global_planner/dev_global_route.py b/code/planning/src/global_planner/dev_global_route.py index 1d455ac7..2f01fdf4 100755 --- a/code/planning/src/global_planner/dev_global_route.py +++ b/code/planning/src/global_planner/dev_global_route.py @@ -25,37 +25,38 @@ class DevGlobalRoute(CompatibleNode): def __init__(self): - super(DevGlobalRoute, self).__init__('DevGlobalRoute') + super(DevGlobalRoute, self).__init__("DevGlobalRoute") self.role_name = self.get_param("role_name", "hero") self.from_txt = self.get_param("from_txt", True) if self.from_txt: self.global_route_txt = self.get_param( - 'global_route_txt', - "/code/planning/src/global_planner/global_route.txt") + "global_route_txt", "/code/planning/src/global_planner/global_route.txt" + ) else: - self.sampling_resolution = self.get_param('sampling_resolution', - 100.0) + self.sampling_resolution = self.get_param("sampling_resolution", 100.0) # consecutively increasing sequence ID for header_msg self.seq = 0 self.routes = self.get_param( - 'routes', "/opt/leaderboard/data/routes_devtest.xml") + "routes", "/opt/leaderboard/data/routes_devtest.xml" + ) self.map_sub = self.new_subscription( msg_type=CarlaWorldInfo, topic="/carla/world_info", callback=self.world_info_callback, - qos_profile=10) + qos_profile=10, + ) self.global_plan_pub = self.new_publisher( msg_type=CarlaRoute, - topic='/carla/' + self.role_name + '/global_plan', + topic="/carla/" + self.role_name + "/global_plan", qos_profile=QoSProfile( - depth=1, - durability=DurabilityPolicy.TRANSIENT_LOCAL) + depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL + 
), ) - self.logerr('DevGlobalRoute-Node started') + self.logerr("DevGlobalRoute-Node started") def world_info_callback(self, data: CarlaWorldInfo) -> None: """ @@ -69,8 +70,10 @@ def world_info_callback(self, data: CarlaWorldInfo) -> None: with open(f"/workspace{self.global_route_txt}", "r") as txt: input_routes = txt.read() except FileNotFoundError: - self.logerr(f"/workspace{self.global_route_txt} not found... " - f"current working directory is'{os.getcwd()}'") + self.logerr( + f"/workspace{self.global_route_txt} not found... " + f"current working directory is'{os.getcwd()}'" + ) raise self.logerr("DevRoute: TXT READ") @@ -85,11 +88,11 @@ def world_info_callback(self, data: CarlaWorldInfo) -> None: secs = int(route.split("secs: ")[1].split("\n")[0]) nsecs = int(route.split("nsecs:")[1].split("\n")[0]) frame_id = route.split('"')[1] - header_list.append( - Header(seq, rospy.Time(secs, nsecs), frame_id)) + header_list.append(Header(seq, rospy.Time(secs, nsecs), frame_id)) road_options_str = route.split("[")[1].split("]")[0].split(",") - road_options_list.append([int(road_option) - for road_option in road_options_str]) + road_options_list.append( + [int(road_option) for road_option in road_options_str] + ) poses_str = route.split("position:")[1:] poses = [] for pose in poses_str: @@ -105,21 +108,21 @@ def world_info_callback(self, data: CarlaWorldInfo) -> None: orientation = Quaternion(x, y, z, w) poses.append(Pose(position, orientation)) poses_list.append(poses) - self.global_plan_pub.publish(header_list[0], road_options_list[0], - poses_list[0]) + self.global_plan_pub.publish( + header_list[0], road_options_list[0], poses_list[0] + ) else: - with open(self.routes, 'r', encoding='utf-8') as file: + with open(self.routes, "r", encoding="utf-8") as file: my_xml = file.read() # Use xmltodict to parse and convert the XML document routes_dict = xmltodict.parse(my_xml) - route = routes_dict['routes']['route'][0] - town = route['@town'] + route = 
routes_dict["routes"]["route"][0] + town = route["@town"] if town not in data.map_name: - self.logerr(f"Map '{data.map_name}' doesnt match routes " - f"'{town}'") + self.logerr(f"Map '{data.map_name}' doesnt match routes " f"'{town}'") return # Convert data into a carla.Map @@ -130,15 +133,15 @@ def world_info_callback(self, data: CarlaWorldInfo) -> None: # plan the route between given waypoints route_trace = [] - waypoints = route['waypoints']['position'] + waypoints = route["waypoints"]["position"] prepoint = waypoints[0] for waypoint in waypoints[1:]: - start = carla.Location(float(prepoint['@x']), - float(prepoint['@y']), - float(prepoint['@z'])) - origin = carla.Location(float(waypoint['@x']), - float(waypoint['@y']), - float(waypoint['@z'])) + start = carla.Location( + float(prepoint["@x"]), float(prepoint["@y"]), float(prepoint["@z"]) + ) + origin = carla.Location( + float(waypoint["@x"]), float(waypoint["@y"]), float(waypoint["@z"]) + ) part_route_trace = grp.trace_route(start, origin) route_trace.extend(part_route_trace) prepoint = waypoint @@ -152,9 +155,11 @@ def world_info_callback(self, data: CarlaWorldInfo) -> None: rotation = waypoint.transform.rotation quaternion = tf.transformations.quaternion_from_euler( - rotation.roll, rotation.pitch, rotation.yaw) - orientation = Quaternion(quaternion[0], quaternion[1], - quaternion[2], quaternion[3]) + rotation.roll, rotation.pitch, rotation.yaw + ) + orientation = Quaternion( + quaternion[0], quaternion[1], quaternion[2], quaternion[3] + ) poses.append(Pose(position, orientation)) road_options.append(road_option) @@ -169,10 +174,10 @@ def world_info_callback(self, data: CarlaWorldInfo) -> None: if __name__ == "__main__": """ - main function starts the NavManager node - :param args: + main function starts the NavManager node + :param args: """ - roscomp.init('DevGlobalRoute') + roscomp.init("DevGlobalRoute") try: DevGlobalRoute() diff --git a/code/planning/src/global_planner/global_planner.py 
b/code/planning/src/global_planner/global_planner.py index e7510ebf..291e161b 100755 --- a/code/planning/src/global_planner/global_planner.py +++ b/code/planning/src/global_planner/global_planner.py @@ -6,7 +6,7 @@ from xml.etree import ElementTree as eTree from geometry_msgs.msg import PoseStamped, Pose, Point, Quaternion -from carla_msgs.msg import CarlaRoute # , CarlaWorldInfo +from carla_msgs.msg import CarlaRoute # , CarlaWorldInfo from nav_msgs.msg import Path from std_msgs.msg import String from std_msgs.msg import Float32MultiArray @@ -34,7 +34,7 @@ class PrePlanner(CompatibleNode): """ def __init__(self): - super(PrePlanner, self).__init__('DevGlobalRoute') + super(PrePlanner, self).__init__("DevGlobalRoute") self.path_backup = Path() @@ -46,36 +46,42 @@ def __init__(self): self.role_name = self.get_param("role_name", "hero") self.control_loop_rate = self.get_param("control_loop_rate", 1) self.distance_spawn_to_first_wp = self.get_param( - "distance_spawn_to_first_wp", 100) + "distance_spawn_to_first_wp", 100 + ) self.map_sub = self.new_subscription( msg_type=String, topic=f"/carla/{self.role_name}/OpenDRIVE", callback=self.world_info_callback, - qos_profile=10) + qos_profile=10, + ) self.global_plan_sub = self.new_subscription( msg_type=CarlaRoute, - topic='/carla/' + self.role_name + '/global_plan', + topic="/carla/" + self.role_name + "/global_plan", callback=self.global_route_callback, - qos_profile=10) + qos_profile=10, + ) self.current_pos_sub = self.new_subscription( msg_type=PoseStamped, topic="/paf/" + self.role_name + "/current_pos", callback=self.position_callback, - qos_profile=1) + qos_profile=1, + ) self.path_pub = self.new_publisher( msg_type=Path, - topic='/paf/' + self.role_name + '/trajectory_global', - qos_profile=1) + topic="/paf/" + self.role_name + "/trajectory_global", + qos_profile=1, + ) self.speed_limit_pub = self.new_publisher( msg_type=Float32MultiArray, topic=f"/paf/{self.role_name}/speed_limits_OpenDrive", - qos_profile=1) - 
self.logdebug('PrePlanner-Node started') + qos_profile=1, + ) + self.logdebug("PrePlanner-Node started") # uncomment for self.dev_load_world_info() for dev_launch # self.dev_load_world_info() @@ -92,26 +98,33 @@ def global_route_callback(self, data: CarlaRoute) -> None: return if self.odc is None: - self.logwarn("PrePlanner: global route got updated before map... " - "therefore the OpenDriveConverter couldn't be " - "initialised yet") + self.logwarn( + "PrePlanner: global route got updated before map... " + "therefore the OpenDriveConverter couldn't be " + "initialised yet" + ) self.global_route_backup = data return if self.agent_pos is None or self.agent_ori is None: - self.logwarn("PrePlanner: global route got updated before current " - "pose... therefore there is no pose to start with") + self.logwarn( + "PrePlanner: global route got updated before current " + "pose... therefore there is no pose to start with" + ) self.global_route_backup = data return - x_start = self.agent_pos.x # 983.5 - y_start = self.agent_pos.y # -5433.2 + x_start = self.agent_pos.x # 983.5 + y_start = self.agent_pos.y # -5433.2 x_target = data.poses[0].position.x y_target = data.poses[0].position.y - if abs(x_start - x_target) > self.distance_spawn_to_first_wp or \ - abs(y_start - y_target) > self.distance_spawn_to_first_wp: - self.logwarn("PrePlanner: current agent-pose doesnt match the " - "given global route") + if ( + abs(x_start - x_target) > self.distance_spawn_to_first_wp + or abs(y_start - y_target) > self.distance_spawn_to_first_wp + ): + self.logwarn( + "PrePlanner: current agent-pose doesnt match the " "given global route" + ) self.global_route_backup = data return @@ -128,35 +141,45 @@ def global_route_callback(self, data: CarlaRoute) -> None: x_target = None y_target = None - x_turn_follow = data.poses[ind+1].position.x - y_turn_follow = data.poses[ind+1].position.y + x_turn_follow = data.poses[ind + 1].position.x + y_turn_follow = data.poses[ind + 1].position.y # Trajectory 
for the starting road segment - self.odc.initial_road_trajectory(x_start, y_start, - x_turn, y_turn, - x_turn_follow, y_turn_follow, - x_target, y_target, - 0, data.road_options[0]) + self.odc.initial_road_trajectory( + x_start, + y_start, + x_turn, + y_turn, + x_turn_follow, + y_turn_follow, + x_target, + y_target, + 0, + data.road_options[0], + ) n = len(data.poses) # iterating through global route to create trajectory - for i in range(1, n-1): + for i in range(1, n - 1): # self.loginfo(f"Preplanner going throug global plan {i+1}/{n}") x_target = data.poses[i].position.x y_target = data.poses[i].position.y action = data.road_options[i] - x_target_next = data.poses[i+1].position.x - y_target_next = data.poses[i+1].position.y - self.odc.target_road_trajectory(x_target, y_target, - x_target_next, y_target_next, - action) - - self.odc.target_road_trajectory(data.poses[n-1].position.x, - data.poses[n-1].position.y, - None, None, - data.road_options[n-1]) + x_target_next = data.poses[i + 1].position.x + y_target_next = data.poses[i + 1].position.y + self.odc.target_road_trajectory( + x_target, y_target, x_target_next, y_target_next, action + ) + + self.odc.target_road_trajectory( + data.poses[n - 1].position.x, + data.poses[n - 1].position.y, + None, + None, + data.road_options[n - 1], + ) # trajectory is now stored in the waypoints # waypoints = self.odc.waypoints waypoints = self.odc.remove_outliner(self.odc.waypoints) @@ -170,11 +193,10 @@ def global_route_callback(self, data: CarlaRoute) -> None: stamped_poses = [] for i in range(len(way_x)): position = Point(way_x[i], way_y[i], 0) # way_speed[i]) - quaternion = tf.transformations.quaternion_from_euler(0, - 0, - way_yaw[i]) - orientation = Quaternion(x=quaternion[0], y=quaternion[1], - z=quaternion[2], w=quaternion[3]) + quaternion = tf.transformations.quaternion_from_euler(0, 0, way_yaw[i]) + orientation = Quaternion( + x=quaternion[0], y=quaternion[1], z=quaternion[2], w=quaternion[3] + ) pose = Pose(position, 
orientation) pos = PoseStamped() pos.header.stamp = rospy.Time.now() @@ -208,8 +230,11 @@ def world_info_callback(self, opendrive: String) -> None: junction_ids = [int(junction.get("id")) for junction in junctions] odc = OpenDriveConverter( - roads=roads, road_ids=road_ids, - junctions=junctions, junction_ids=junction_ids) + roads=roads, + road_ids=road_ids, + junctions=junctions, + junction_ids=junction_ids, + ) odc.convert_roads() odc.convert_junctions() @@ -218,8 +243,9 @@ def world_info_callback(self, opendrive: String) -> None: self.odc = odc if self.global_route_backup is not None: - self.loginfo("PrePlanner: Received a map update retrying " - "route preplanning") + self.loginfo( + "PrePlanner: Received a map update retrying " "route preplanning" + ) self.global_route_callback(self.global_route_backup) def position_callback(self, data: PoseStamped): @@ -232,17 +258,17 @@ def position_callback(self, data: PoseStamped): self.agent_pos = data.pose.position self.agent_ori = data.pose.orientation if self.global_route_backup is not None: - self.loginfo("PrePlanner: Received a pose update retrying " - "route preplanning") + self.loginfo( + "PrePlanner: Received a pose update retrying " "route preplanning" + ) try: self.global_route_callback(self.global_route_backup) except Exception: self.logerr("Preplanner failed -> restart") def dev_load_world_info(self): - file_path = \ - "/workspace/code/planning/src/global_planner/string_world_info.txt" - with open(file_path, 'r') as file: + file_path = "/workspace/code/planning/src/global_planner/string_world_info.txt" + with open(file_path, "r") as file: file_content = file.read() self.logerr("DATA READ") self.world_info_callback(file_content) @@ -260,7 +286,7 @@ def run(self): main function starts the PrePlanner node :param args: """ - roscomp.init('PrePlanner') + roscomp.init("PrePlanner") try: node = PrePlanner() diff --git a/code/planning/src/global_planner/help_functions.py 
b/code/planning/src/global_planner/help_functions.py index a9f13913..9c419e70 100755 --- a/code/planning/src/global_planner/help_functions.py +++ b/code/planning/src/global_planner/help_functions.py @@ -23,8 +23,7 @@ def euclid_dist(vector1: Tuple[float, float], vector2: Tuple[float, float]): return np.sqrt(sum_sqrt) -def unit_vector(vector: Tuple[float, float], size: float)\ - -> Tuple[float, float]: +def unit_vector(vector: Tuple[float, float], size: float) -> Tuple[float, float]: """ Calculate the unit vector. :param vector: input vector for calculation @@ -35,8 +34,7 @@ def unit_vector(vector: Tuple[float, float], size: float)\ return size * (vector[0] / length), size * (vector[1] / length) -def perpendicular_vector_right(vector: Tuple[float, float])\ - -> Tuple[float, float]: +def perpendicular_vector_right(vector: Tuple[float, float]) -> Tuple[float, float]: """ Perpendicular vector on the right side :param vector: input vector @@ -47,8 +45,7 @@ def perpendicular_vector_right(vector: Tuple[float, float])\ return perp -def perpendicular_vector_left(vector: Tuple[float, float])\ - -> Tuple[float, float]: +def perpendicular_vector_left(vector: Tuple[float, float]) -> Tuple[float, float]: """ Perpendicular vector on the left side :param vector: input vector @@ -59,8 +56,9 @@ def perpendicular_vector_left(vector: Tuple[float, float])\ return perp -def add_vector(v_1: Tuple[float, float], v_2: Tuple[float, float]) \ - -> Tuple[float, float]: +def add_vector( + v_1: Tuple[float, float], v_2: Tuple[float, float] +) -> Tuple[float, float]: """ Addition of two vectors :param v_1: first vector with x and y coordinate @@ -70,8 +68,9 @@ def add_vector(v_1: Tuple[float, float], v_2: Tuple[float, float]) \ return v_1[0] + v_2[0], v_1[1] + v_2[1] -def sub_vector(v_1: Tuple[float, float], v_2: Tuple[float, float]) \ - -> Tuple[float, float]: +def sub_vector( + v_1: Tuple[float, float], v_2: Tuple[float, float] +) -> Tuple[float, float]: """ Subtraction of two vectors :param 
v_1: first vector with x and y coordinate @@ -81,16 +80,17 @@ def sub_vector(v_1: Tuple[float, float], v_2: Tuple[float, float]) \ return v_1[0] - v_2[0], v_1[1] - v_2[1] -def rotate_vector(vector: Tuple[float, float], angle_rad: float) \ - -> Tuple[float, float]: +def rotate_vector(vector: Tuple[float, float], angle_rad: float) -> Tuple[float, float]: """ Rotate the given vector by an angle with the rotationmatrix :param vector: input vector with x and y coordinate :param angle_rad: rotation angle in rad :return: resulting vector """ - return (cos(angle_rad) * vector[0] - sin(angle_rad) * vector[1], - sin(angle_rad) * vector[0] + cos(angle_rad) * vector[1]) + return ( + cos(angle_rad) * vector[0] - sin(angle_rad) * vector[1], + sin(angle_rad) * vector[0] + cos(angle_rad) * vector[1], + ) def direction_vector(angle_rad: float) -> Tuple[float, float]: @@ -102,8 +102,7 @@ def direction_vector(angle_rad: float) -> Tuple[float, float]: return (cos(angle_rad), sin(angle_rad)) -def scale_vector(vector: Tuple[float, float], new_len: float) \ - -> Tuple[float, float]: +def scale_vector(vector: Tuple[float, float], new_len: float) -> Tuple[float, float]: """ Amplify the length of the given vector :param vector: input vector with x and y coordinate @@ -113,8 +112,7 @@ def scale_vector(vector: Tuple[float, float], new_len: float) \ old_len = vector_len(vector) if old_len == 0: return (0, 0) - scaled_vector = (vector[0] * new_len / old_len, - vector[1] * new_len / old_len) + scaled_vector = (vector[0] * new_len / old_len, vector[1] * new_len / old_len) return scaled_vector @@ -124,11 +122,12 @@ def vector_len(vec: Tuple[float, float]) -> float: :param vec: input vector with x and y coordinate :return: length of the vector """ - return sqrt(vec[0]**2 + vec[1]**2) + return sqrt(vec[0] ** 2 + vec[1] ** 2) -def points_to_vector(p_1: Tuple[float, float], p_2: Tuple[float, float]) \ - -> Tuple[float, float]: +def points_to_vector( + p_1: Tuple[float, float], p_2: Tuple[float, float] 
+) -> Tuple[float, float]: """ Create the vector starting at p1 and ending at p2 :param p_1: first input vector @@ -138,8 +137,9 @@ def points_to_vector(p_1: Tuple[float, float], p_2: Tuple[float, float]) \ return p_2[0] - p_1[0], p_2[1] - p_1[1] -def end_of_circular_arc(start_point: Tuple[float, float], angle: float, - length: float, radius: float) -> Tuple[float, float]: +def end_of_circular_arc( + start_point: Tuple[float, float], angle: float, length: float, radius: float +) -> Tuple[float, float]: """ Compute the end of a circular arc :param start_point: starting point with x and y coordinate @@ -161,9 +161,9 @@ def end_of_circular_arc(start_point: Tuple[float, float], angle: float, return add_vector(start_point, diff_vec) -def circular_interpolation(start: Tuple[float, float], - end: Tuple[float, float], - arc_radius: float): +def circular_interpolation( + start: Tuple[float, float], end: Tuple[float, float], arc_radius: float +): """ Interpolate points between start / end point on top of the circular arc given by the arc radius @@ -184,11 +184,9 @@ def circular_interpolation(start: Tuple[float, float], # construct the mid-perpendicular of |start, end| to determine the # circle's center conn_middle = ((start[0] + end[0]) / 2, (start[1] + end[1]) / 2) - center_offset = sqrt(pow(arc_radius, 2) - pow(euclid_dist(start, end) / - 2, 2)) - mid_perpend = rotate_vector(points_to_vector(start, end), pi/2 * sign) - circle_center = add_vector(conn_middle, scale_vector(mid_perpend, - center_offset)) + center_offset = sqrt(pow(arc_radius, 2) - pow(euclid_dist(start, end) / 2, 2)) + mid_perpend = rotate_vector(points_to_vector(start, end), pi / 2 * sign) + circle_center = add_vector(conn_middle, scale_vector(mid_perpend, center_offset)) # partition the arc into steps (-> interpol. 
geometries) arc_circumference = arc_radius * angle @@ -197,15 +195,17 @@ def circular_interpolation(start: Tuple[float, float], # compute the interpolated points on the circle arc vec = points_to_vector(circle_center, start) - rot_angles = [angle * (i / num_steps) for i in range(num_steps+1)] - points = [add_vector(circle_center, rotate_vector(vec, rot * sign)) - for rot in rot_angles] + rot_angles = [angle * (i / num_steps) for i in range(num_steps + 1)] + points = [ + add_vector(circle_center, rotate_vector(vec, rot * sign)) for rot in rot_angles + ] return points -def linear_interpolation(start: Tuple[float, float], end: Tuple[float, float], - interval_m: float): +def linear_interpolation( + start: Tuple[float, float], end: Tuple[float, float], interval_m: float +): """ Interpolate linearly between the given start / end point by putting points according to the interval specified @@ -220,10 +220,14 @@ def linear_interpolation(start: Tuple[float, float], end: Tuple[float, float], steps = max(1, floor(distance / interval_m)) exceeds_interval = distance > interval_m - step_vector = (vector[0] / steps if exceeds_interval else vector[0], - vector[1] / steps if exceeds_interval else vector[1]) - - lin_points = [(start[0] + step_vector[0] * i, - start[1] + step_vector[1] * i) for i in range(steps)] + step_vector = ( + vector[0] / steps if exceeds_interval else vector[0], + vector[1] / steps if exceeds_interval else vector[1], + ) + + lin_points = [ + (start[0] + step_vector[0] * i, start[1] + step_vector[1] * i) + for i in range(steps) + ] lin_points.append(end) return lin_points diff --git a/code/planning/src/global_planner/preplanning_trajectory.py b/code/planning/src/global_planner/preplanning_trajectory.py index c0233918..478addd7 100755 --- a/code/planning/src/global_planner/preplanning_trajectory.py +++ b/code/planning/src/global_planner/preplanning_trajectory.py @@ -37,19 +37,21 @@ class OpenDriveConverter: The OpenDriveConverter needs am OpenDrive Map and the 
     road options from the Carla leaderboard to calculate the global trajectory.
     """
-    def __init__(self, path=None, roads=None, road_ids=None,
-                 junctions=None, junction_ids=None):
+
+    def __init__(
+        self, path=None, roads=None, road_ids=None, junctions=None, junction_ids=None
+    ):
         if roads is None or road_ids is None:
-            self.roads, self.road_ids = self.list_xodr_properties(
-                path, name="road")
+            self.roads, self.road_ids = self.list_xodr_properties(path, name="road")
         else:
             self.roads = roads
             self.road_ids = road_ids
         if junctions is None or junction_ids is None:
             self.junctions, self.junction_ids = self.list_xodr_properties(
-                path, name="junction")
+                path, name="junction"
+            )
         else:
             self.junctions = junctions
             self.junction_ids = junction_ids
@@ -90,11 +92,11 @@ def __init__(self, path=None, roads=None, road_ids=None,
         """

     def list_xodr_properties(self, path: str, name: str):
-        """ Filter properties out of the xodr file
-        :param path: reference to the xodr file
-        :param name: name of the property to filter
-        :return elements: list of the preferred elements
-        :return element_ids: list of the id values for each element
+        """Filter properties out of the xodr file
+        :param path: reference to the xodr file
+        :param name: name of the property to filter
+        :return elements: list of the preferred elements
+        :return element_ids: list of the id values for each element
         """
         # find reference to root node of xodr file
         root = eTree.parse(path).getroot()
@@ -103,7 +105,7 @@ def list_xodr_properties(self, path: str, name: str):
         return elements, element_ids

     def convert_roads(self):
-        """ Filter all road elements in a list. Every road index belongs
+        """Filter all road elements in a list. Every road index belongs
         to the same list index.
         """
         max_id = int(max(self.road_ids))
@@ -118,7 +120,7 @@ def convert_roads(self):
         self.roads = roads_extracted

     def convert_junctions(self):
-        """ Filter all junction elements in a list. Every junction index
+        """Filter all junction elements in a list. Every junction index
         belongs to the same list index.
         """
         max_id = int(max(self.junction_ids))
@@ -133,7 +135,7 @@ def convert_junctions(self):
         self.junctions = junctions_extracted

     def filter_geometry(self):
-        """ Extract all the geometry information for each road
+        """Extract all the geometry information for each road
         use the initialised roads object from the init function
         this function is only used once when the OpenDrive map is received
         """
@@ -163,11 +165,11 @@ def filter_geometry(self):
             else:
                 curvature.append(0.0)
             geometry_data.append([x, y, heading, curvature, length])
-        assert (len(self.roads) == len(geometry_data))
+        assert len(self.roads) == len(geometry_data)
         self.geometry_data = geometry_data

     def find_current_road(self, x_curr: float, y_curr: float):
-        """ Extract the current road that fits to the x and y coordinate
+        """Extract the current road that fits to the x and y coordinate
         Needed to find the starting road of the agent and for every new
         waypoint we receive from Carla.
         :param x_curr: the current global x position of the agent
@@ -182,8 +184,9 @@ def find_current_road(self, x_curr: float, y_curr: float):
                 j += 1
                 continue
             for i in range(len(road[0])):
-                diff = help_functions.euclid_dist((road[0][i], road[1][i]),
-                                                  (x_curr, y_curr))
+                diff = help_functions.euclid_dist(
+                    (road[0][i], road[1][i]), (x_curr, y_curr)
+                )
                 diff_list.append(diff)
                 diff_index_list.append(j)
             j += 1
@@ -195,21 +198,26 @@ def find_current_road(self, x_curr: float, y_curr: float):
         predecessor, successor = self.get_pred_succ(selected_road_id)
         # Successor and predecessor are a junction -> selected road correct
-        if self.geometry_data[predecessor] is None and \
-                self.geometry_data[successor] is None:
+        if (
+            self.geometry_data[predecessor] is None
+            and self.geometry_data[successor] is None
+        ):
             current_id = selected_road_id
         # no junction recognized or current road is junction
         else:
-            current_id = self.calculate_intervalls_id(agent_position,
-                                                      selected_road_id,
-                                                      predecessor,
-                                                      successor,
-                                                      junction)
+            current_id = self.calculate_intervalls_id(
+                agent_position, selected_road_id, predecessor, successor, junction
+            )
         return current_id

-    def calculate_intervalls_id(self, agent: Tuple[float, float],
-                                current: int, pred: int, succ: int,
-                                junction: int):
+    def calculate_intervalls_id(
+        self,
+        agent: Tuple[float, float],
+        current: int,
+        pred: int,
+        succ: int,
+        junction: int,
+    ):
         """
         The function assumes, that the current chosen road is not the only
         one that is possible. The current raad is calculated based on all
@@ -234,8 +242,7 @@ def calculate_intervalls_id(self, agent: Tuple[float, float],
             else:
                 id_value = pred
         # Only one possible other road and current road is not a junction
-        elif self.geometry_data[pred] is None or self.geometry_data[succ] \
-                is None:
+        elif self.geometry_data[pred] is None or self.geometry_data[succ] is None:
             if pred is None:
                 road = succ
             else:
@@ -278,9 +285,8 @@ def calculate_intervalls_id(self, agent: Tuple[float, float],
             id_value = self.get_special_case_id(first, second, agent)
         return id_value

-    def get_special_case_id(self, road: int, current: int,
-                            agent: Tuple[float, float]):
-        """ When the function get_min_dist() returns two solutions with the
+    def get_special_case_id(self, road: int, current: int, agent: Tuple[float, float]):
+        """When the function get_min_dist() returns two solutions with the
         same distance, this function calculated the distance based on the
         interpolation of the two possible roads.
         :param road: id value of the successor or predecessor road
@@ -292,12 +298,14 @@ def get_special_case_id(self, road: int, current: int,
         list_r = self.interpolation(road)
         list_c = self.interpolation(current)
-        dist_r = [help_functions.euclid_dist(
-            agent, (list_r[0][i], list_r[1][i]))
-            for i in range(len(list_r[0]))]
-        dist_c = [help_functions.euclid_dist(
-            agent, (list_c[0][i], list_c[1][i]))
-            for i in range(len(list_r[0]))]
+        dist_r = [
+            help_functions.euclid_dist(agent, (list_r[0][i], list_r[1][i]))
+            for i in range(len(list_r[0]))
+        ]
+        dist_c = [
+            help_functions.euclid_dist(agent, (list_c[0][i], list_c[1][i]))
+            for i in range(len(list_r[0]))
+        ]
         value_r = min(dist_r)
         value_c = min(dist_c)
         if value_r < value_c:
@@ -307,7 +315,7 @@ def get_special_case_id(self, road: int, current: int,
         return final_id

     def get_min_dist(self, dist: list):
-        """ Calculate the minimum distance value from a distance list.
+        """Calculate the minimum distance value from a distance list.
         :param dist: list containing all distance values
         :return: min_dist: the minimum distance/distances from the list
         """
@@ -323,7 +331,7 @@ def get_min_dist(self, dist: list):
         return min_dist

     def get_dist_list(self, pred, current, succ, agent):
-        """ Calculate the distances between the endpoints of a possible road
+        """Calculate the distances between the endpoints of a possible road
         and the current agent position.
         :param pred: id value of the predecessor road
         :param current: id value of the assumed road
@@ -344,12 +352,20 @@ def get_dist_list(self, pred, current, succ, agent):
             dist.append(end_d)
         return dist

-    def initial_road_trajectory(self, x_curr: float, y_curr: float,
-                                x_target: float, y_target: float,
-                                x_next_t: float, y_next_t: float,
-                                x_first_t: float, y_first_t: float,
-                                yaw: int, command: int):
-        """ Create the trajectory on the initial road.
+    def initial_road_trajectory(
+        self,
+        x_curr: float,
+        y_curr: float,
+        x_target: float,
+        y_target: float,
+        x_next_t: float,
+        y_next_t: float,
+        x_first_t: float,
+        y_first_t: float,
+        yaw: int,
+        command: int,
+    ):
+        """Create the trajectory on the initial road.
         The agent has to be located on the map. This case has some
         special requirements. We have to define the driving direction
         and the trajectory to the next road segment. The start case
@@ -369,26 +385,26 @@ def initial_road_trajectory(self, x_curr: float, y_curr: float,
         junction.
         :param command: next action from the leaderboard
         """
-        self.road_id = self.find_current_road(x_curr=x_curr,
-                                              y_curr=y_curr)
+        self.road_id = self.find_current_road(x_curr=x_curr, y_curr=y_curr)
         self.old_id = self.road_id
         predecessor, successor = self.get_pred_succ(road_id=self.road_id)
-        self.follow_id, follow_section_id = self.\
-            get_initial_next_road_id(predecessor=predecessor,
-                                     successor=successor,
-                                     x_target=x_target, y_target=y_target,
-                                     yaw=yaw)
+        self.follow_id, follow_section_id = self.get_initial_next_road_id(
+            predecessor=predecessor,
+            successor=successor,
+            x_target=x_target,
+            y_target=y_target,
+            yaw=yaw,
+        )
         agent_position = (x_curr, y_curr)
         # Interpolate the road_id
         points = self.interpolation(self.road_id)
-        points = self.check_point_order(points=points,
-                                        x_target=x_next_t,
-                                        y_target=y_next_t)
+        points = self.check_point_order(
+            points=points, x_target=x_next_t, y_target=y_next_t
+        )
         self.reference = copy.deepcopy(points)
         widths = self.lane_widths(self.road_id)
         self.width = widths[-1]
-        self.direction = self.right_or_left(points, x_curr, y_curr,
-                                            self.width)
+        self.direction = self.right_or_left(points, x_curr, y_curr, self.width)
         points = self.calculate_midpoints(points)
         # Check if lane change on first road is needed
         if x_first_t is None and y_first_t is None:
@@ -409,11 +425,20 @@ def initial_road_trajectory(self, x_curr: float, y_curr: float,
         # last points new based on the target lane for the next
         # action
         if min_dist <= TARGET_DIFF:
             p = points.copy()
-            points = self.target_reached(x_target, y_target,
-                                         x_next_t, y_next_t, command,
-                                         index, self.reference,
-                                         follow_section_id, p,
-                                         widths, self.direction, True)
+            points = self.target_reached(
+                x_target,
+                y_target,
+                x_next_t,
+                y_next_t,
+                command,
+                index,
+                self.reference,
+                follow_section_id,
+                p,
+                widths,
+                self.direction,
+                True,
+            )
         # Find and remove the points that are not needed to pass the
         # first road from the agent position to the end of the road
         min_dist = float("inf")
@@ -436,7 +461,7 @@ def initial_road_trajectory(self, x_curr: float, y_curr: float,
         self.waypoints = points

     def calculate_midpoints(self, points: list):
-        """ Calculate the trajectory points in the middle
+        """Calculate the trajectory points in the middle
         of a lane and return the points
         :param points: list of all trajectory points to reach
         the following road
@@ -453,10 +478,15 @@ def calculate_midpoints(self, points: list):
         points = self.update_points(points, self.direction, self.width)
         return points

-    def target_road_trajectory(self, x_target: float, y_target: float,
-                               x_next_t: float, y_next_t: float,
-                               command: int):
-        """ Calculate the trajectory to the next waypoint
+    def target_road_trajectory(
+        self,
+        x_target: float,
+        y_target: float,
+        x_next_t: float,
+        y_next_t: float,
+        command: int,
+    ):
+        """Calculate the trajectory to the next waypoint
         The waypoints are given by the Carla Leaderboard and the next
         action for the agent
         :param x_target: x position of the target waypoint
@@ -482,17 +512,18 @@ def target_road_trajectory(self, x_target: float, y_target: float,
             if command <= 3:
                 if x_next_t is not None and y_next_t is not None:
                     predecessor, successor = self.get_pred_succ(
-                        road_id=self.road_id)
-                    follow, follow_section_id = self. \
-                        get_initial_next_road_id(predecessor=predecessor,
-                                                 successor=successor,
-                                                 x_target=x_next_t,
-                                                 y_target=y_next_t,
-                                                 yaw=self.pt[2][-1])
+                        road_id=self.road_id
+                    )
+                    follow, follow_section_id = self.get_initial_next_road_id(
+                        predecessor=predecessor,
+                        successor=successor,
+                        x_target=x_next_t,
+                        y_target=y_next_t,
+                        yaw=self.pt[2][-1],
+                    )
                     self.follow_id = self.next_action_id(
-                        x_next_t, y_next_t,
-                        follow_section_id,
-                        self.pt)
+                        x_next_t, y_next_t, follow_section_id, self.pt
+                    )
                     break
                 # command is lane action
                 else:
@@ -505,13 +536,20 @@ def target_road_trajectory(self, x_target: float, y_target: float,
                             min_dist = dist
                             index = i
                     widths = self.lane_widths(self.road_id)
-                    points = self.target_reached(x_target, y_target,
-                                                 x_next_t, y_next_t,
-                                                 command, index,
-                                                 self.reference_l,
-                                                 self.follow_section,
-                                                 self.pt, widths,
-                                                 self.direction, False)
+                    points = self.target_reached(
+                        x_target,
+                        y_target,
+                        x_next_t,
+                        y_next_t,
+                        command,
+                        index,
+                        self.reference_l,
+                        self.follow_section,
+                        self.pt,
+                        widths,
+                        self.direction,
+                        False,
+                    )
                     points = copy.deepcopy(points)
                     start = len(self.waypoints[0]) - len(points[0])
                     for i in range(len(points)):
@@ -524,19 +562,19 @@ def target_road_trajectory(self, x_target: float, y_target: float,
                     break
             else:
                 self.road_id = self.follow_id
-                predecessor, successor = self.get_pred_succ(
-                    road_id=self.road_id)
-                self.follow_id, follow_section_id = self. \
-                    get_initial_next_road_id(predecessor=predecessor,
-                                             successor=successor,
-                                             x_target=x_target,
-                                             y_target=y_target,
-                                             yaw=self.pt[2][-1])
+                predecessor, successor = self.get_pred_succ(road_id=self.road_id)
+                self.follow_id, follow_section_id = self.get_initial_next_road_id(
+                    predecessor=predecessor,
+                    successor=successor,
+                    x_target=x_target,
+                    y_target=y_target,
+                    yaw=self.pt[2][-1],
+                )
                 # Interpolate the road_id
                 points = self.interpolation(self.road_id)
-                points = self.check_point_order(points=points,
-                                                x_target=x_target,
-                                                y_target=y_target)
+                points = self.check_point_order(
+                    points=points, x_target=x_target, y_target=y_target
+                )
                 self.reference[0] += copy.deepcopy(points[0])
                 self.reference[1] += copy.deepcopy(points[1])
                 self.reference[2] += copy.deepcopy(points[2])
@@ -554,10 +592,15 @@ def target_road_trajectory(self, x_target: float, y_target: float,
                 w_min = None
                 for width in widths:
                     p, v = self.update_one_point(
-                        points[0][0], points[1][0],
-                        points[0][1], points[1][1],
-                        points[0][2], points[1][2],
-                        self.direction, width)
+                        points[0][0],
+                        points[1][0],
+                        points[0][1],
+                        points[1][1],
+                        points[0][2],
+                        points[1][2],
+                        self.direction,
+                        width,
+                    )
                     diff = help_functions.euclid_dist(p, last_p)
                     if diff < min_diff:
                         min_diff = diff
@@ -575,10 +618,9 @@ def target_road_trajectory(self, x_target: float, y_target: float,
                 points = self.calculate_midpoints(points)
                 if command == LEFT or command == RIGHT or command == STRAIGHT:
                     if x_next_t is not None and y_next_t is not None:
-                        self.follow_id = self.next_action_id(x_next_t,
-                                                             y_next_t,
-                                                             follow_section_id,
-                                                             points)
+                        self.follow_id = self.next_action_id(
+                            x_next_t, y_next_t, follow_section_id, points
+                        )
                 self.pt = points
                 self.reference_l = reference_line
                 self.follow_section = follow_section_id
@@ -592,12 +634,20 @@ def target_road_trajectory(self, x_target: float, y_target: float,
                     if dist < min_dist:
                         min_dist = dist
                         index = i
-                points = self.target_reached(x_target, y_target,
-                                             x_next_t, y_next_t,
-                                             command,
-                                             index, reference_line,
-                                             follow_section_id, points,
-                                             widths, self.direction, False)
+                points = self.target_reached(
+                    x_target,
+                    y_target,
+                    x_next_t,
+                    y_next_t,
+                    command,
+                    index,
+                    reference_line,
+                    follow_section_id,
+                    points,
+                    widths,
+                    self.direction,
+                    False,
+                )
                 self.add_waypoints(points)
                 self.pt = points
                 self.old_id = self.road_id
@@ -608,13 +658,22 @@ def target_road_trajectory(self, x_target: float, y_target: float,
                 self.add_waypoints(points)
                 break

-    def target_reached(self, target_x: float, target_y: float,
-                       x_next_t: float, y_next_t: float,
-                       command: int, index: int,
-                       reference_line, follow_section_id: int,
-                       points_calc, widths: list,
-                       direction: bool, initial: bool):
-        """ If a lane change is detected, the trajectory needs to be
+    def target_reached(
+        self,
+        target_x: float,
+        target_y: float,
+        x_next_t: float,
+        y_next_t: float,
+        command: int,
+        index: int,
+        reference_line,
+        follow_section_id: int,
+        points_calc,
+        widths: list,
+        direction: bool,
+        initial: bool,
+    ):
+        """If a lane change is detected, the trajectory needs to be
         interpolated again and update the waypoints
         where the lane change should occur.
         :param target_x: x coordinate of the target point
@@ -631,8 +690,11 @@ def target_reached(self, target_x: float, target_y: float,
         :param initial: indicates if it is the first interpolated road or not
         :return: points_calc: new calculated points
         """
-        if command == CHANGE_LEFT or \
-                command == CHANGE_RIGHT or command == CHANGE_FOLLOW:
+        if (
+            command == CHANGE_LEFT
+            or command == CHANGE_RIGHT
+            or command == CHANGE_FOLLOW
+        ):
             if command == CHANGE_LEFT:
                 points = reference_line
                 last_width = self.width
@@ -640,12 +702,11 @@ def target_reached(self, target_x: float, target_y: float,
                 ind = widths.index(last_width)
                 if ind == 0:
                     return points_calc
-                new_width = widths[ind-1]
+                new_width = widths[ind - 1]
                 self.width = new_width
                 diff = abs(last_width - new_width)
                 steps = int(diff / step_size)
-                first_widths = [last_width - step_size * i
-                                for i in range(steps)]
+                first_widths = [last_width - step_size * i for i in range(steps)]
                 for i in range(len(first_widths)):
                     p1_x = points[0][index + i]
                     p1_y = points[1][index + i]
@@ -654,14 +715,12 @@ def target_reached(self, target_x: float, target_y: float,
                     if i != len((points[0])) - 2:
                         p3_x = points[0][index + i + 2]
                         p3_y = points[1][index + i + 2]
-                    point, v = self.update_one_point(p1_x, p1_y,
-                                                     p2_x, p2_y,
-                                                     p3_x, p3_y,
-                                                     direction,
-                                                     first_widths[i])
+                    point, v = self.update_one_point(
+                        p1_x, p1_y, p2_x, p2_y, p3_x, p3_y, direction, first_widths[i]
+                    )
                     points_calc[0][index + i] = point[0]
                     points_calc[1][index + i] = point[1]
-                for i in range(index + len(first_widths), len(points[0])-1):
+                for i in range(index + len(first_widths), len(points[0]) - 1):
                     p1_x = points[0][i]
                     p1_y = points[1][i]
                     p2_x = points[0][i + 1]
@@ -672,18 +731,21 @@ def target_reached(self, target_x: float, target_y: float,
                     else:
                         p3_x = points[0][-1]
                         p3_y = points[1][-1]
-                    point, v = self.update_one_point(p1_x, p1_y,
-                                                     p2_x, p2_y,
-                                                     p3_x, p3_y,
-                                                     direction,
-                                                     new_width)
+                    point, v = self.update_one_point(
+                        p1_x, p1_y, p2_x, p2_y, p3_x, p3_y, direction, new_width
+                    )
                     points_calc[0][i] = point[0]
                     points_calc[1][i] = point[1]
-                point, v = self.update_one_point(p2_x, p2_y,
-                                                 target_x, target_y,
-                                                 target_x, target_y,
-                                                 direction,
-                                                 new_width)
+                point, v = self.update_one_point(
+                    p2_x,
+                    p2_y,
+                    target_x,
+                    target_y,
+                    target_x,
+                    target_y,
+                    direction,
+                    new_width,
+                )
                 points_calc[0][i + 1] = point[0]
                 points_calc[1][i + 1] = point[1]
             # change lane right
@@ -698,8 +760,7 @@ def target_reached(self, target_x: float, target_y: float,
                 self.width = new_width
                 diff = abs(last_width - new_width)
                 steps = int(diff / step_size)
-                first_widths = [last_width + step_size * i
-                                for i in range(steps)]
+                first_widths = [last_width + step_size * i for i in range(steps)]
                 for i in range(len(first_widths)):
                     p1_x = points[0][index + i]
                     p1_y = points[1][index + i]
@@ -708,14 +769,12 @@ def target_reached(self, target_x: float, target_y: float,
                     if i != len((points[0])) - 2:
                         p3_x = points[0][index + i + 2]
                         p3_y = points[1][index + i + 2]
-                    point, v = self.update_one_point(p1_x, p1_y,
-                                                     p2_x, p2_y,
-                                                     p3_x, p3_y,
-                                                     direction,
-                                                     first_widths[i])
+                    point, v = self.update_one_point(
+                        p1_x, p1_y, p2_x, p2_y, p3_x, p3_y, direction, first_widths[i]
+                    )
                     points_calc[0][index + i] = point[0]
                     points_calc[1][index + i] = point[1]
-                for i in range(index + len(first_widths), len(points[0])-1):
+                for i in range(index + len(first_widths), len(points[0]) - 1):
                     p1_x = points[0][i]
                     p1_y = points[1][i]
                     p2_x = points[0][i + 1]
@@ -723,45 +782,48 @@ def target_reached(self, target_x: float, target_y: float,
                     if i != len((points[0])) - 2:
                         p3_x = points[0][i + 2]
                         p3_y = points[1][i + 2]
-                    point, v = self.update_one_point(p1_x, p1_y,
-                                                     p2_x, p2_y,
-                                                     p3_x, p3_y,
-                                                     direction,
-                                                     new_width)
+                    point, v = self.update_one_point(
+                        p1_x, p1_y, p2_x, p2_y, p3_x, p3_y, direction, new_width
+                    )
                     points_calc[0][i] = point[0]
                     points_calc[1][i] = point[1]
-                point, v = self.update_one_point(p2_x, p2_y,
-                                                 target_x, target_y,
-                                                 target_x, target_y,
-                                                 direction,
-                                                 new_width)
+                point, v = self.update_one_point(
+                    p2_x,
+                    p2_y,
+                    target_x,
+                    target_y,
+                    target_x,
+                    target_y,
+                    direction,
+                    new_width,
+                )
                 points_calc[0][i + 1] = point[0]
                 points_calc[1][i + 1] = point[1]
         # passing a junction action
         else:
             if x_next_t is not None and y_next_t is not None:
-                predecessor, successor = self.get_pred_succ(
-                    road_id=self.road_id)
-                follow, follow_section_id = self. \
-                    get_initial_next_road_id(predecessor=predecessor,
-                                             successor=successor,
-                                             x_target=x_next_t,
-                                             y_target=y_next_t,
-                                             yaw=self.pt[2][-1])
-                self.follow_id = self.next_action_id(x_next_t, y_next_t,
-                                                     follow_section_id,
-                                                     points_calc)
+                predecessor, successor = self.get_pred_succ(road_id=self.road_id)
+                follow, follow_section_id = self.get_initial_next_road_id(
+                    predecessor=predecessor,
+                    successor=successor,
+                    x_target=x_next_t,
+                    y_target=y_next_t,
+                    yaw=self.pt[2][-1],
+                )
+                self.follow_id = self.next_action_id(
+                    x_next_t, y_next_t, follow_section_id, points_calc
+                )
         if initial is False:
-            del points_calc[0][index + 1:]
-            del points_calc[1][index + 1:]
-            del points_calc[2][index + 1:]
-            del points_calc[3][index + 1:]
+            del points_calc[0][index + 1 :]
+            del points_calc[1][index + 1 :]
+            del points_calc[2][index + 1 :]
+            del points_calc[3][index + 1 :]
         return points_calc

     def rad_to_degree(self, radians):
-        """ Convert radians value to degrees
-        :param radians: heading value in rad
-        :return: deg: degree value
+        """Convert radians value to degrees
+        :param radians: heading value in rad
+        :return: deg: degree value
         """
         radians = abs(radians)
         deg = degrees(radians)
@@ -769,9 +831,10 @@ def rad_to_degree(self, radians):
         deg = 360 - degrees(radians)
         return deg

-    def next_action_id(self, x_next_t: float, y_next_t: float,
-                       sec_id: int, points: list):
-        """ Calculate the next road id for the given action from
+    def next_action_id(
+        self, x_next_t: float, y_next_t: float, sec_id: int, points: list
+    ):
+        """Calculate the next road id for the given action from
         the leaderboard
         :param x_next_t: x coordinate of the next target point
         :param y_next_t: y coordinate of the next target point
@@ -791,22 +854,25 @@ def next_action_id(self, x_next_t: float, y_next_t: float,
         if current_road is None:
             junction = sec_id
             incoming_road = self.road_id
-            possible_road_ids = self.filter_road_ids(junction,
-                                                     incoming_road)
+            possible_road_ids = self.filter_road_ids(junction, incoming_road)
             last_point_x = points[0][-1]
             last_point_y = points[1][-1]
             last_point = (last_point_x, last_point_y)
-            action_id = self.calculate_action_id(possible_road_ids,
-                                                 last_point,
-                                                 x_next_t, y_next_t)
+            action_id = self.calculate_action_id(
+                possible_road_ids, last_point, x_next_t, y_next_t
+            )
         else:
             action_id = self.follow_id
         return action_id

-    def calculate_action_id(self, possible_road_ids: list,
-                            last_point: Tuple[float, float],
-                            x_next_t: float, y_next_t: float):
-        """ Calculate the next road to take from the junction based
+    def calculate_action_id(
+        self,
+        possible_road_ids: list,
+        last_point: Tuple[float, float],
+        x_next_t: float,
+        y_next_t: float,
+    ):
+        """Calculate the next road to take from the junction based
         on the next action from the leaderboard
         :param possible_road_ids: list of the next possible road ids
         :param last_point: last calculated point of the trajectory
@@ -836,7 +902,7 @@ def calculate_action_id(self, possible_road_ids: list,
         return possible_road_ids[index]

     def filter_road_ids(self, junction: int, incoming: int):
-        """ Filter the road id values of all connecting roads
+        """Filter the road id values of all connecting roads
         that are linked to the incoming road
         :param junction: id value of the junction
         :param incoming: id value of the incoming road
@@ -853,9 +919,9 @@ def filter_road_ids(self, junction: int, incoming: int):
         return road_ids

     def lane_widths(self, road_id: int):
-        """ Filter all lane width values from a given road
-        :param road_id: the id value of the examined road
-        :return: widths: list of all width values
+        """Filter all lane width values from a given road
+        :param road_id: the id value of the examined road
+        :return: widths: list of all width values
         """
         road = self.roads[road_id]
         lanes = road.find("lanes")
@@ -902,9 +968,8 @@ def lane_widths(self, road_id: int):
                 widths.append(middle + width * i)
         return widths

-    def right_or_left(self, points: list, x_agent: float, y_agent: float,
-                      width: float):
-        """ Define on which side of the reference line the trajectory
+    def right_or_left(self, points: list, x_agent: float, y_agent: float, width: float):
+        """Define on which side of the reference line the trajectory
         is running. If it returns true the update point function
         will choose the correct function for the update of the
         trajectory points
@@ -928,14 +993,12 @@ def right_or_left(self, points: list, x_agent: float, y_agent: float,
         y_follow = points[1][1]
         point_x = points[0][2]
         point_y = points[1][2]
-        point1, v = self.update_one_point(x_start, y_start,
-                                          x_follow, y_follow,
-                                          point_x, point_y,
-                                          True, width)
-        point2, v = self.update_one_point(x_start, y_start,
-                                          x_follow, y_follow,
-                                          point_x, point_y,
-                                          False, width)
+        point1, v = self.update_one_point(
+            x_start, y_start, x_follow, y_follow, point_x, point_y, True, width
+        )
+        point2, v = self.update_one_point(
+            x_start, y_start, x_follow, y_follow, point_x, point_y, False, width
+        )
         agent = (x_agent, y_agent)
         dist_1 = help_functions.euclid_dist(point1, agent)
         dist_2 = help_functions.euclid_dist(point2, agent)
@@ -943,11 +1006,18 @@ def right_or_left(self, points: list, x_agent: float, y_agent: float,
             direction = False
         return direction

-    def update_one_point(self, point1_x: float, point1_y: float,
-                         point2_x: float, point2_y: float,
-                         point3_x: float, point3_y: float,
-                         right: bool, width: float):
-        """ Update the coordinates of a point width the given
+    def update_one_point(
+        self,
+        point1_x: float,
+        point1_y: float,
+        point2_x: float,
+        point2_y: float,
+        point3_x: float,
+        point3_y: float,
+        right: bool,
+        width: float,
+    ):
+        """Update the coordinates of a point width the given
         width value for the correct lane
         :param point1_x: x coordinate of the point to update
         :param point1_y: y coordinate of the point to update
@@ -975,7 +1045,7 @@ def update_one_point(self, point1_x: float, point1_y: float,
         return point, vector

     def update_points(self, p_list: list, right: bool, width: float):
-        """ Update the coordinates of a point list width the given
+        """Update the coordinates of a point list width the given
         width value for the correct lane
         :param p_list: list of all trajectory points to reach
         the following road
@@ -999,10 +1069,9 @@ def update_points(self, p_list: list, right: bool, width: float):
             if i != len((p_list[0])) - 2:
                 point3_x = p_list[0][i + 2]
                 point3_y = p_list[1][i + 2]
-            point, vector = self.update_one_point(point1_x, point1_y,
-                                                  point2_x, point2_y,
-                                                  point3_x, point3_y,
-                                                  right, width)
+            point, vector = self.update_one_point(
+                point1_x, point1_y, point2_x, point2_y, point3_x, point3_y, right, width
+            )
             p_list[0][i] = point[0]
             p_list[1][i] = point[1]
             point = (p_list[0][i + 1], p_list[1][i + 1])
@@ -1012,13 +1081,13 @@ def update_points(self, p_list: list, right: bool, width: float):
         return p_list

     def add_waypoints(self, points: list):
-        """ Add calculated points to the trajectory list
-        :param points: list of all trajectory points
-        format: [x_points, y_points, heading, speed]
-        x_points: list of x_values (m)
-        y_points: list of y_values (m)
-        heading: list of yaw values (rad)
-        speed: list of speed limitations (m/s)
+        """Add calculated points to the trajectory list
+        :param points: list of all trajectory points
+        format: [x_points, y_points, heading, speed]
+        x_points: list of x_values (m)
+        y_points: list of y_values (m)
+        heading: list of yaw values (rad)
+        speed: list of speed limitations (m/s)
         """
         x = copy.deepcopy(points[0])
         y = copy.deepcopy(points[1])
@@ -1029,18 +1098,17 @@ def add_waypoints(self, points: list):
         self.waypoints[2] += hdg
         self.waypoints[3] += speed

-    def check_point_order(self, points: list, x_target: float,
-                          y_target: float):
-        """ Check if the trajectory points have the correct order
-        :param points: list of all trajectory points
-        format: [x_points, y_points, heading, speed]
-        x_points: list of x_values (m)
-        y_points: list of y_values (m)
-        heading: list of yaw values (rad)
-        speed: list of speed limitations (m/s)
-        :param x_target: x coordinate of the target point
-        :param y_target: y coordinate of the target point
-        :return: points: same format as the parameter points
+    def check_point_order(self, points: list, x_target: float, y_target: float):
+        """Check if the trajectory points have the correct order
+        :param points: list of all trajectory points
+        format: [x_points, y_points, heading, speed]
+        x_points: list of x_values (m)
+        y_points: list of y_values (m)
+        heading: list of yaw values (rad)
+        speed: list of speed limitations (m/s)
+        :param x_target: x coordinate of the target point
+        :param y_target: y coordinate of the target point
+        :return: points: same format as the parameter points
         """
         target = (x_target, y_target)
         start_x = points[0][0]
@@ -1069,9 +1137,9 @@ def check_point_order(self, points: list, x_target: float,
         return points

     def get_speed(self, road_id: int):
-        """ Filter and calculate the max_speed for the road
-        :param road_id: id value for the road
-        :return: speed: speed value for the road in m/s
+        """Filter and calculate the max_speed for the road
+        :param road_id: id value for the road
+        :return: speed: speed value for the road in m/s
         """
         road = self.roads[road_id]
        speed_type = road.find("type").find("speed").get("unit")
@@ -1087,15 +1155,15 @@ def get_speed(self, road_id: int):
         return speed

     def interpolation(self, road_id: int):
-        """ Interpolate over a complete road
-        :param road_id: id value of the current road
-        :return: waypoints: list of all trajectory points to reach
-        the following road
-        format: [x_points, y_points, heading, speed]
-        x_points: list of x_values (m)
-        y_points: list of y_values (m)
-        heading: list of yaw values (rad)
-        speed: list of speed limitations (m/s)
+        """Interpolate over a complete road
+        :param road_id: id value of the current road
+        :return: waypoints: list of all trajectory points to reach
+        the following road
+        format: [x_points, y_points, heading, speed]
+        x_points: list of x_values (m)
+        y_points: list of y_values (m)
+        heading: list of yaw values (rad)
+        speed: list of speed limitations (m/s)
         """
         x = list()
         y = list()
@@ -1114,18 +1182,16 @@ def interpolation(self, road_id: int):
                 yd = length * sin(hdg)
                 end = help_functions.add_vector(start, (xd, yd))
                 points = help_functions.linear_interpolation(
-                    start=start,
-                    end=end,
-                    interval_m=INTERVALL)
+                    start=start, end=end, interval_m=INTERVALL
+                )
             else:
                 radius = self.geometry_data[road_id][3][i]
                 end = help_functions.end_of_circular_arc(
-                    start_point=start, angle=hdg,
-                    length=length, radius=radius)
+                    start_point=start, angle=hdg, length=length, radius=radius
+                )
                 points = help_functions.circular_interpolation(
-                    start=start,
-                    end=end,
-                    arc_radius=radius)
+                    start=start, end=end, arc_radius=radius
+                )
             for j in range(len(points)):
                 x.append(points[j][0])
                 y.append(points[j][1])
@@ -1137,7 +1203,7 @@ def interpolation(self, road_id: int):
         return points

     def remove_outliner(self, points):
-        """ Function checks if distance between two following trajectory
+        """Function checks if distance between two following trajectory
         points is to small or to big. Delete and update
         waypoints if necessary.
         :param points: points: list of all trajectory points
         format: [x_points, y_points, heading, speed]
@@ -1154,10 +1220,10 @@ def remove_outliner(self, points):
             dist = help_functions.euclid_dist(p, p_next)
             # point is to close to the following point (0.5m)
             if dist < 0.5:
-                delete_index.append(i+1)
+                delete_index.append(i + 1)
             # outliner point
             elif dist > 3:
-                delete_index.append(i+1)
+                delete_index.append(i + 1)
         # delete the points with the calculated indices
         number = 0
         for i in delete_index:
@@ -1170,11 +1236,11 @@ def remove_outliner(self, points):
         return points

     def get_endpoints(self, road_id: int):
-        """ Calculate the startpoint and endpoint of a given road
-        :param road_id: the road id of the examined road
-        :return: start_point, end_point
-        start_point: x and y coordinate of the starting point
-        end_point: x and y coordinate of the ending point
+        """Calculate the startpoint and endpoint of a given road
+        :param road_id: the road id of the examined road
+        :return: start_point, end_point
+        start_point: x and y coordinate of the starting point
+        end_point: x and y coordinate of the ending point
         """
         size = len(self.geometry_data[road_id][0])
         x_start = self.geometry_data[road_id][0][0]
@@ -1190,7 +1256,7 @@ def get_endpoints(self, road_id: int):
         last_start = (x, y)

         # check the last curvature value to see if it is line or arc
-        if self.geometry_data[road_id][3][size-1] == LINE:
+        if self.geometry_data[road_id][3][size - 1] == LINE:
             xd = length * cos(hdg)
             yd = length * sin(hdg)
             # subtract a small value due to inaccuracy
@@ -1204,17 +1270,21 @@ def get_endpoints(self, road_id: int):
                 yd += 0.05
             end_point = help_functions.add_vector(last_start, (xd, yd))
         else:
-            radius = self.geometry_data[road_id][3][size-1]
+            radius = self.geometry_data[road_id][3][size - 1]
             end_point = help_functions.end_of_circular_arc(
-                start_point=last_start, angle=hdg,
-                length=length, radius=radius)
+                start_point=last_start, angle=hdg, length=length, radius=radius
+            )
         return start_point, end_point

-    def get_initial_next_road_id(self, predecessor: int,
-                                 successor: int,
-                                 x_target: float, y_target: float,
-                                 yaw: int):
-        """ Find the next road to drive
+    def get_initial_next_road_id(
+        self,
+        predecessor: int,
+        successor: int,
+        x_target: float,
+        y_target: float,
+        yaw: int,
+    ):
+        """Find the next road to drive
         When the agent starts driving it is not sure if he has to follow his
         successor or his predecessor. This function calculates the next road
         id, based on the dist to the target point. The road, who is nearer to
@@ -1239,16 +1309,14 @@ def get_initial_next_road_id(self, predecessor: int,
             final_id = predecessor
         else:
             min_distances = list()
-            x_road_p, y_road_p, pred = self.\
-                get_next_road_point(predecessor, yaw)
+            x_road_p, y_road_p, pred = self.get_next_road_point(predecessor, yaw)
             point1, point2 = self.get_endpoints(pred)
             dist1 = help_functions.euclid_dist(point1, target)
             min_distances.append(dist1)
             dist2 = help_functions.euclid_dist(point2, target)
             min_distances.append(dist2)
-            x_road_s, y_road_s, succ = self.\
-                get_next_road_point(successor, yaw)
+            x_road_s, y_road_s, succ = self.get_next_road_point(successor, yaw)
             point3, point4 = self.get_endpoints(succ)
             dist3 = help_functions.euclid_dist(point3, target)
             min_distances.append(dist3)
@@ -1266,7 +1334,7 @@ def get_initial_next_road_id(self, predecessor: int,
         return final_id, section_id

     def get_pred_succ(self, road_id: int):
-        """ Find the predecessor and the successor road of the current road
+        """Find the predecessor and the successor road of the current road
         If there is only a successor or only a predecessor, this function
         handles these cases
         :param road_id: id of the current road
@@ -1279,7 +1347,7 @@ def get_pred_succ(self, road_id: int):
         curr_road = self.roads[road_id]
         link = curr_road.find("link")
         # Road needs a successor or predecessor
-        assert (len(link) > 0)
+        assert len(link) > 0
         # if only one following road
         if len(link) == 1:
             next_road_id = link[0].get("elementId")
@@ -1293,20 +1361,20 @@ def get_pred_succ(self, road_id: int):
             # predecessor and successor -> only choose which direction
             # to drive
             else:
-                predecessor = int(curr_road.find("link").find("predecessor").
-                                  get("elementId"))
-                successor = int(curr_road.find("link").find("successor").
-                                get("elementId"))
+                predecessor = int(
+                    curr_road.find("link").find("predecessor").get("elementId")
+                )
+                successor = int(curr_road.find("link").find("successor").get("elementId"))
         return predecessor, successor

     def get_next_road_point(self, road_id: int, yaw: int):
-        """ The function returns the x and y coordinate for a given road
-        :param road_id: the id value of the preferred road
-        :param yaw: yaw value of the agent
-        :return: x, y, road_id
-        x: value of the x coordinate
-        y: value of the y coordinate
-        road_id: id of the chosen road
+        """The function returns the x and y coordinate for a given road
+        :param road_id: the id value of the preferred road
+        :param yaw: yaw value of the agent
+        :return: x, y, road_id
+        x: value of the x coordinate
+        y: value of the y coordinate
+        road_id: id of the chosen road
         """
         line_list = list()
         # check if it is a junction
diff --git a/code/planning/src/local_planner/ACC.py b/code/planning/src/local_planner/ACC.py
index 0c013364..74b681b2 100755
--- a/code/planning/src/local_planner/ACC.py
+++ b/code/planning/src/local_planner/ACC.py
@@ -3,7 +3,7 @@
 from ros_compatibility.node import CompatibleNode
 from rospy import Subscriber, Publisher
 from geometry_msgs.msg import PoseStamped
-from carla_msgs.msg import CarlaSpeedometer # , CarlaWorldInfo
+from carla_msgs.msg import CarlaSpeedometer  # , CarlaWorldInfo
 from nav_msgs.msg import Path
 from std_msgs.msg import Float32MultiArray, Float32, Bool
 import numpy as np
@@ -16,7 +16,7 @@ class ACC(CompatibleNode):
     """

     def __init__(self):
-        super(ACC, self).__init__('ACC')
+        super(ACC, self).__init__("ACC")
         self.role_name = self.get_param("role_name", "hero")
         self.control_loop_rate = self.get_param("control_loop_rate", 1)
@@ -25,63 +25,67 @@ def __init__(self):
             Bool,
             f"/paf/{self.role_name}/unstuck_flag",
             self.__get_unstuck_flag,
-            qos_profile=1)
+            qos_profile=1,
+        )
         self.unstuck_distance_sub: Subscriber = self.new_subscription(
             Float32,
             f"/paf/{self.role_name}/unstuck_distance",
             self.__get_unstuck_distance,
-            qos_profile=1)
+            qos_profile=1,
+        )
         # Get current speed
         self.velocity_sub: Subscriber = self.new_subscription(
             CarlaSpeedometer,
             f"/carla/{self.role_name}/Speed",
             self.__get_current_velocity,
-            qos_profile=1)
+            qos_profile=1,
+        )
         # Get initial set of speed limits from global planner
         self.speed_limit_OD_sub: Subscriber = self.new_subscription(
             Float32MultiArray,
             f"/paf/{self.role_name}/speed_limits_OpenDrive",
             self.__set_speed_limits_opendrive,
-            qos_profile=1)
+            qos_profile=1,
+        )
         # Get trajectory to determine current speed limit
         self.trajectory_sub: Subscriber = self.new_subscription(
             Path,
             f"/paf/{self.role_name}/trajectory_global",
             self.__set_trajectory,
-            qos_profile=1)
+            qos_profile=1,
+        )
         # Get current position to determine current waypoint
         self.pose_sub: Subscriber = self.new_subscription(
             msg_type=PoseStamped,
             topic="/paf/" + self.role_name + "/current_pos",
             callback=self.__current_position_callback,
-            qos_profile=1)
+            qos_profile=1,
+        )
         # Get approximated speed from obstacle in front
         self.approx_speed_sub = self.new_subscription(
             Float32MultiArray,
             f"/paf/{self.role_name}/collision",
             self.__collision_callback,
-            qos_profile=1)
+            qos_profile=1,
+        )
         # Publish desired speed to acting
         self.velocity_pub: Publisher = self.new_publisher(
-            Float32,
-            f"/paf/{self.role_name}/acc_velocity",
-            qos_profile=1)
+            Float32, f"/paf/{self.role_name}/acc_velocity", qos_profile=1
+        )
         # Publish current waypoint and speed limit
         self.wp_publisher: Publisher = self.new_publisher(
-            Float32,
-            f"/paf/{self.role_name}/current_wp",
-            qos_profile=1)
+            Float32, f"/paf/{self.role_name}/current_wp", qos_profile=1
+        )
         self.speed_limit_publisher: Publisher = self.new_publisher(
-            Float32,
-            f"/paf/{self.role_name}/speed_limit",
-            qos_profile=1)
+            Float32, f"/paf/{self.role_name}/speed_limit", qos_profile=1
+        )

         # unstuck attributes
         self.__unstuck_flag: bool = False
@@ -172,11 +176,9 @@ def __current_position_callback(self, data: PoseStamped):
         agent = data.pose.position

         # Get current waypoint
-        current_wp = self.__trajectory.poses[self.__current_wp_index].\
-            pose.position
+        current_wp = self.__trajectory.poses[self.__current_wp_index].pose.position
         # Get next waypoint
-        next_wp = self.__trajectory.poses[self.__current_wp_index + 1].\
-            pose.position
+        next_wp = self.__trajectory.poses[self.__current_wp_index + 1].pose.position
         # distances from agent to current and next waypoint
         d_old = abs(agent.x - current_wp.x) + abs(agent.y - current_wp.y)
         d_new = abs(agent.x - next_wp.x) + abs(agent.y - next_wp.y)
@@ -185,19 +187,16 @@ def __current_position_callback(self, data: PoseStamped):
             # update current waypoint and corresponding speed limit
             self.__current_wp_index += 1
             self.wp_publisher.publish(self.__current_wp_index)
-            self.speed_limit = \
-                self.__speed_limits_OD[self.__current_wp_index]
+            self.speed_limit = self.__speed_limits_OD[self.__current_wp_index]
             self.speed_limit_publisher.publish(self.speed_limit)
         # in case we used the unstuck routine to drive backwards
         # we have to follow WPs that are already passed
         elif self.__unstuck_flag:
-            if self.__unstuck_distance is None\
-                    or self.__unstuck_distance == -1:
+            if self.__unstuck_distance is None or self.__unstuck_distance == -1:
                 return
             self.__current_wp_index -= int(self.__unstuck_distance)
             self.wp_publisher.publish(self.__current_wp_index)
-            self.speed_limit = \
-                self.__speed_limits_OD[self.__current_wp_index]
+            self.speed_limit = self.__speed_limits_OD[self.__current_wp_index]
             self.speed_limit_publisher.publish(self.speed_limit)

     def run(self):
@@ -205,27 +204,31 @@ def run(self):
         Control loop
         :return:
         """
+
         def loop(timer_event=None):
             """
             Permanent checks if distance to a possible object is too small
and publishes the desired speed to motion planning """ - if self.obstacle_distance is not None and \ - self.obstacle_speed is not None and \ - self.__current_velocity is not None: + if ( + self.obstacle_distance is not None + and self.obstacle_speed is not None + and self.__current_velocity is not None + ): # If we have obstacle information, # we can calculate the safe speed safety_distance = calculate_rule_of_thumb( - False, self.__current_velocity) + False, self.__current_velocity + ) if self.obstacle_distance < safety_distance: # If safety distance is reached, we want to reduce the # speed to meet the desired distance # https://encyclopediaofmath.org/index.php?title=Linear_interpolation - safe_speed = self.obstacle_speed * \ - (self.obstacle_distance / safety_distance) + safe_speed = self.obstacle_speed * ( + self.obstacle_distance / safety_distance + ) # Interpolate speed for smoother braking - safe_speed = interpolate_speed(safe_speed, - self.__current_velocity) + safe_speed = interpolate_speed(safe_speed, self.__current_velocity) if safe_speed < 1.0: safe_speed = 0 self.velocity_pub.publish(safe_speed) @@ -255,7 +258,7 @@ def loop(timer_event=None): main function starts the ACC node :param args: """ - roscomp.init('ACC') + roscomp.init("ACC") try: node = ACC() diff --git a/code/planning/src/local_planner/collision_check.py b/code/planning/src/local_planner/collision_check.py index f9550717..70f0946a 100755 --- a/code/planning/src/local_planner/collision_check.py +++ b/code/planning/src/local_planner/collision_check.py @@ -2,12 +2,13 @@ # import rospy import numpy as np import rospy + # import tf.transformations import ros_compatibility as roscomp from ros_compatibility.node import CompatibleNode from rospy import Subscriber -from carla_msgs.msg import CarlaSpeedometer # , CarlaWorldInfo +from carla_msgs.msg import CarlaSpeedometer # , CarlaWorldInfo from std_msgs.msg import Float32, Float32MultiArray from std_msgs.msg import Bool @@ -21,7 +22,7 @@ class
CollisionCheck(CompatibleNode): """ def __init__(self): - super(CollisionCheck, self).__init__('CollisionCheck') + super(CollisionCheck, self).__init__("CollisionCheck") self.role_name = self.get_param("role_name", "hero") self.control_loop_rate = self.get_param("control_loop_rate", 1) # Subscriber for current speed @@ -29,28 +30,27 @@ def __init__(self): CarlaSpeedometer, f"/carla/{self.role_name}/Speed", self.__get_current_velocity, - qos_profile=1) + qos_profile=1, + ) # Subscriber for lidar objects self.lidar_dist = self.new_subscription( Float32MultiArray, f"/paf/{self.role_name}/Center/object_distance", self.__set_all_distances, - qos_profile=1) + qos_profile=1, + ) # Publisher for emergency stop self.emergency_pub = self.new_publisher( - Bool, - f"/paf/{self.role_name}/emergency", - qos_profile=1) + Bool, f"/paf/{self.role_name}/emergency", qos_profile=1 + ) # Publisher for distance to collision self.collision_pub = self.new_publisher( - Float32MultiArray, - f"/paf/{self.role_name}/collision", - qos_profile=1) + Float32MultiArray, f"/paf/{self.role_name}/collision", qos_profile=1 + ) # Publisher for distance to oncoming traffic self.oncoming_pub = self.new_publisher( - Float32, - f"/paf/{self.role_name}/oncoming", - qos_profile=1) + Float32, f"/paf/{self.role_name}/oncoming", qos_profile=1 + ) # Variables to save vehicle data self.__current_velocity: float = None self.__object_first_position: tuple = None @@ -103,10 +103,11 @@ def __set_distance(self, data: Float32MultiArray): """ # Filter objects in front nearest_object = filter_vision_objects(data.data, False) - if nearest_object is None and \ - self.__object_last_position is not None and \ - rospy.get_rostime() - self.__object_last_position[0] > \ - rospy.Duration(2): + if ( + nearest_object is None + and self.__object_last_position is not None + and rospy.get_rostime() - self.__object_last_position[0] > rospy.Duration(2) + ): # If no object is in front and last object is older than 2 seconds # we assume
no object is in front self.update_distance(True) @@ -129,10 +130,12 @@ def __set_distance_oncoming(self, data: Float32MultiArray): """ # Filter for oncoming traffic objects nearest_object = filter_vision_objects(data.data, True) - if (nearest_object is None and - self.__last_position_oncoming is not None and - rospy.get_rostime() - self.__last_position_oncoming[0] > - rospy.Duration(2)): + if ( + nearest_object is None + and self.__last_position_oncoming is not None + and rospy.get_rostime() - self.__last_position_oncoming[0] + > rospy.Duration(2) + ): # If no oncoming traffic found and last object is older than 2 # seconds we assume no object is in front self.update_distance_oncoming(True) @@ -141,8 +144,7 @@ def __set_distance_oncoming(self, data: Float32MultiArray): # If no oncoming traffic abort return - self.__last_position_oncoming = \ - (rospy.get_rostime(), nearest_object[1]) + self.__last_position_oncoming = (rospy.get_rostime(), nearest_object[1]) # Update oncoming traffic distance if this is first object since reset self.update_distance_oncoming(False) # Publish oncoming traffic to Decision Making @@ -170,26 +172,28 @@ def update_distance_oncoming(self, reset): def calculate_obstacle_speed(self): """Calculate the speed of the obstacle in front of the ego vehicle - based on the distance between two timestamps. - Then check for collision + based on the distance between two timestamps. 
+ Then check for collision """ # Check if current speed from vehicle is not None - if self.__current_velocity is None or \ - self.__object_first_position is None or \ - self.__object_last_position is None: + if ( + self.__current_velocity is None + or self.__object_first_position is None + or self.__object_last_position is None + ): return # Calculate time since last position update - rospy_time_difference = self.__object_last_position[0] - \ - self.__object_first_position[0] + rospy_time_difference = ( + self.__object_last_position[0] - self.__object_first_position[0] + ) # Use nanoseconds for time difference to be more accurate # and reduce error - time_difference = rospy_time_difference.nsecs/1e9 + time_difference = rospy_time_difference.nsecs / 1e9 # Calculate distance (in m) - distance = self.__object_last_position[1] - \ - self.__object_first_position[1] + distance = self.__object_last_position[1] - self.__object_first_position[1] try: # Speed difference is distance/time (m/s) - relative_speed = distance/time_difference + relative_speed = distance / time_difference except ZeroDivisionError: # If time difference is 0, we cannot calculate speed return @@ -204,7 +208,10 @@ def calculate_obstacle_speed(self): # Update first position to calculate speed when next object is detected self.__object_first_position = self.__object_last_position - def __get_current_velocity(self, data: CarlaSpeedometer,): + def __get_current_velocity( + self, + data: CarlaSpeedometer, + ): """Saves current velocity of the ego vehicle Args: @@ -228,7 +235,7 @@ def time_to_collision(self, obstacle_speed, distance): return distance / (self.__current_velocity - obstacle_speed) def check_crash(self, obstacle): - """ Checks if and when the ego vehicle will crash + """Checks if and when the ego vehicle will crash with the obstacle in front Args: @@ -239,8 +246,7 @@ def check_crash(self, obstacle): collision_time = self.time_to_collision(obstacle_speed, distance) # Calculate emergency distance 
based on current speed - emergency_distance = calculate_rule_of_thumb( - True, self.__current_velocity) + emergency_distance = calculate_rule_of_thumb(True, self.__current_velocity) if collision_time > 0: # If time to collision is positive, a collision is ahead if distance < emergency_distance: @@ -268,7 +274,7 @@ def run(self): main function starts the CollisionCheck node :param args: """ - roscomp.init('CollisionCheck') + roscomp.init("CollisionCheck") try: node = CollisionCheck() diff --git a/code/planning/src/local_planner/motion_planning.py b/code/planning/src/local_planner/motion_planning.py index d7448df9..61c3a4e3 100755 --- a/code/planning/src/local_planner/motion_planning.py +++ b/code/planning/src/local_planner/motion_planning.py @@ -17,8 +17,7 @@ import planning # noqa: F401 from behavior_agent.behaviours import behavior_speed as bs -from utils import convert_to_ms, spawn_car, NUM_WAYPOINTS, \ - TARGET_DISTANCE_TO_STOP +from utils import convert_to_ms, spawn_car, NUM_WAYPOINTS, TARGET_DISTANCE_TO_STOP # from scipy.spatial._kdtree import KDTree @@ -35,7 +34,7 @@ class MotionPlanning(CompatibleNode): """ def __init__(self): - super(MotionPlanning, self).__init__('MotionPlanning') + super(MotionPlanning, self).__init__("MotionPlanning") self.role_name = self.get_param("role_name", "hero") self.control_loop_rate = self.get_param("control_loop_rate", 0.05) @@ -65,101 +64,107 @@ def __init__(self): self.init_overtake_pos = None # Subscriber self.test_sub = self.new_subscription( - Float32, - f"/paf/{self.role_name}/spawn_car", - spawn_car, - qos_profile=1) + Float32, f"/paf/{self.role_name}/spawn_car", spawn_car, qos_profile=1 + ) self.speed_limit_sub = self.new_subscription( Float32, f"/paf/{self.role_name}/speed_limit", self.__set_speed_limit, - qos_profile=1) + qos_profile=1, + ) self.velocity_sub: Subscriber = self.new_subscription( CarlaSpeedometer, f"/carla/{self.role_name}/Speed", self.__get_current_velocity, - qos_profile=1) + qos_profile=1, + ) 
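Reviewer note: the safety-distance heuristic used by both `ACC` and `CollisionCheck` (`calculate_rule_of_thumb`, defined in `utils.py` later in this patch) and `time_to_collision` from `collision_check.py` are compact enough to sketch standalone. The snippet below is a plain-Python sketch mirroring the patched code, not the node code itself; the non-positive closing-speed guard in `time_to_collision` is an addition for this sketch:

```python
def calculate_rule_of_thumb(emergency: bool, speed: float) -> float:
    """Driving-school rule of thumb: reaction distance plus braking distance.

    Mirrors utils.py in this patch; speed in m/s, result in meters.
    """
    reaction_distance = speed  # roughly one second of travel at current speed
    braking_distance = (speed * 0.36) ** 2
    if emergency:
        # Emergency braking is very effective in CARLA, so halve the braking term
        return reaction_distance + braking_distance / 2
    return reaction_distance + braking_distance


def time_to_collision(ego_speed: float, obstacle_speed: float, distance: float) -> float:
    """Seconds until impact at constant speeds; inf when not closing in (sketch-only guard)."""
    if ego_speed <= obstacle_speed:
        return float("inf")
    return distance / (ego_speed - obstacle_speed)


# e.g. at 50 km/h (~13.9 m/s) the non-emergency safety distance is roughly 39 m
print(calculate_rule_of_thumb(False, 50 / 3.6))
```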
self.head_sub = self.new_subscription( Float32, f"/paf/{self.role_name}/current_heading", self.__set_heading, - qos_profile=1) + qos_profile=1, + ) self.trajectory_sub = self.new_subscription( Path, f"/paf/{self.role_name}/trajectory_global", self.__set_trajectory, - qos_profile=1) + qos_profile=1, + ) self.current_pos_sub = self.new_subscription( PoseStamped, f"/paf/{self.role_name}/current_pos", self.__set_current_pos, - qos_profile=1) + qos_profile=1, + ) self.curr_behavior_sub: Subscriber = self.new_subscription( String, f"/paf/{self.role_name}/curr_behavior", self.__set_curr_behavior, - qos_profile=1) + qos_profile=1, + ) self.emergency_sub: Subscriber = self.new_subscription( Bool, f"/paf/{self.role_name}/unchecked_emergency", self.__check_emergency, - qos_profile=1) + qos_profile=1, + ) self.acc_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/acc_velocity", self.__set_acc_speed, - qos_profile=1) + qos_profile=1, + ) self.stopline_sub: Subscriber = self.new_subscription( Waypoint, f"/paf/{self.role_name}/waypoint_distance", self.__set_stopline, - qos_profile=1) + qos_profile=1, + ) self.change_point_sub: Subscriber = self.new_subscription( LaneChange, f"/paf/{self.role_name}/lane_change_distance", self.__set_change_point, - qos_profile=1) + qos_profile=1, + ) self.coll_point_sub: Subscriber = self.new_subscription( Float32MultiArray, f"/paf/{self.role_name}/collision", self.__set_collision_point, - qos_profile=1) + qos_profile=1, + ) self.traffic_y_sub: Subscriber = self.new_subscription( Int16, f"/paf/{self.role_name}/Center/traffic_light_y_distance", self.__set_traffic_y_distance, - qos_profile=1) + qos_profile=1, + ) self.unstuck_distance_sub: Subscriber = self.new_subscription( Float32, f"/paf/{self.role_name}/unstuck_distance", self.__set_unstuck_distance, - qos_profile=1) + qos_profile=1, + ) # Publisher self.traj_pub: Publisher = self.new_publisher( - msg_type=Path, - topic=f"/paf/{self.role_name}/trajectory", - qos_profile=1) 
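Similarly, the braking branch of the ACC loop reformatted earlier in this patch (scale the obstacle's speed by the distance ratio once inside the safety distance, then smooth) can be sketched in isolation. `interpolate_speed` is imported from `utils` but its body is not shown in this diff, so the convex-combination smoothing below is a hypothetical stand-in:

```python
def interpolate_speed(target: float, current: float, alpha: float = 0.5) -> float:
    # Hypothetical stand-in for utils.interpolate_speed (not shown in the diff):
    # eases from the current speed toward the target speed.
    return alpha * target + (1 - alpha) * current


def acc_safe_speed(obstacle_speed, obstacle_distance, current_velocity, safety_distance):
    """Sketch of the ACC braking branch: inside the safety distance, linearly
    interpolate the obstacle's speed by the distance ratio, smooth the result,
    and clamp near-standstill values to a full stop."""
    if obstacle_distance >= safety_distance:
        return None  # outside the safety distance this branch does not fire
    safe_speed = obstacle_speed * (obstacle_distance / safety_distance)
    safe_speed = interpolate_speed(safe_speed, current_velocity)
    return 0.0 if safe_speed < 1.0 else safe_speed
```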
+ msg_type=Path, topic=f"/paf/{self.role_name}/trajectory", qos_profile=1 + ) self.velocity_pub: Publisher = self.new_publisher( - Float32, - f"/paf/{self.role_name}/target_velocity", - qos_profile=1) + Float32, f"/paf/{self.role_name}/target_velocity", qos_profile=1 + ) self.wp_subs = self.new_subscription( - Float32, - f"/paf/{self.role_name}/current_wp", - self.__set_wp, - qos_profile=1) + Float32, f"/paf/{self.role_name}/current_wp", self.__set_wp, qos_profile=1 + ) self.overtake_success_pub = self.new_publisher( - Float32, - f"/paf/{self.role_name}/overtake_success", - qos_profile=1) + Float32, f"/paf/{self.role_name}/overtake_success", qos_profile=1 + ) self.logdebug("MotionPlanning started") self.counter = 0 @@ -209,9 +214,9 @@ def __set_current_pos(self, data: PoseStamped): Args: data (PoseStamped): current position """ - self.current_pos = np.array([data.pose.position.x, - data.pose.position.y, - data.pose.position.z]) + self.current_pos = np.array( + [data.pose.position.x, data.pose.position.y, data.pose.position.z] + ) def __set_traffic_y_distance(self, data): if data is not None: @@ -237,20 +242,25 @@ def overtake_fallback(self, distance, pose_list, unstuck=False): normal_x_offset = 2 unstuck_x_offset = 3 # could need adjustment with better steering if unstuck: - selection = pose_list[int(currentwp)-2:int(currentwp) + - int(distance)+2 + NUM_WAYPOINTS] + selection = pose_list[ + int(currentwp) - 2 : int(currentwp) + int(distance) + 2 + NUM_WAYPOINTS + ] else: - selection = pose_list[int(currentwp) + int(distance/2): - int(currentwp) + - int(distance) + NUM_WAYPOINTS] + selection = pose_list[ + int(currentwp) + + int(distance / 2) : int(currentwp) + + int(distance) + + NUM_WAYPOINTS + ] waypoints = self.convert_pose_to_array(selection) if unstuck is True: offset = np.array([unstuck_x_offset, 0, 0]) else: offset = np.array([normal_x_offset, 0, 0]) - rotation_adjusted = Rotation.from_euler('z', self.current_heading + - math.radians(90)) + rotation_adjusted 
= Rotation.from_euler( + "z", self.current_heading + math.radians(90) + ) offset_front = rotation_adjusted.apply(offset) offset_front = offset_front[:2] waypoints_off = waypoints + offset_front @@ -261,8 +271,7 @@ def overtake_fallback(self, distance, pose_list, unstuck=False): result = [] for i in range(len(result_x)): position = Point(result_x[i], result_y[i], 0) - orientation = Quaternion(x=0, y=0, - z=0, w=0) + orientation = Quaternion(x=0, y=0, z=0, w=0) pose = Pose(position, orientation) pos = PoseStamped() pos.header.frame_id = "global" @@ -272,12 +281,17 @@ def overtake_fallback(self, distance, pose_list, unstuck=False): path.header.stamp = rospy.Time.now() path.header.frame_id = "global" if unstuck: - path.poses = pose_list[:int(currentwp)-2] + \ - result + pose_list[int(currentwp) + - int(distance) + 2 + NUM_WAYPOINTS:] + path.poses = ( + pose_list[: int(currentwp) - 2] + + result + + pose_list[int(currentwp) + int(distance) + 2 + NUM_WAYPOINTS :] + ) else: - path.poses = pose_list[:int(currentwp) + int(distance/2)] + \ - result + pose_list[int(currentwp + distance + NUM_WAYPOINTS):] + path.poses = ( + pose_list[: int(currentwp) + int(distance / 2)] + + result + + pose_list[int(currentwp + distance + NUM_WAYPOINTS) :] + ) self.trajectory = path @@ -389,8 +403,9 @@ def convert_pose_to_array(poses: np.array): """ result_array = np.empty((len(poses), 2)) for pose in range(len(poses)): - result_array[pose] = np.array([poses[pose].pose.position.x, - poses[pose].pose.position.y]) + result_array[pose] = np.array( + [poses[pose].pose.position.x, poses[pose].pose.position.y] + ) return result_array def __check_emergency(self, data: Bool): @@ -435,8 +450,7 @@ def __set_stopline(self, data: Waypoint) -> float: def __set_change_point(self, data: LaneChange): if data is not None: - self.__change_point = \ - (data.distance, data.isLaneChange, data.roadOption) + self.__change_point = (data.distance, data.isLaneChange, data.roadOption) def __set_collision_point(self, 
data: Float32MultiArray): if data.data is not None: @@ -475,10 +489,10 @@ def __get_speed_unstuck(self, behavior: str) -> float: self.logfatal("Unstuck distance not set") return speed - if self.init_overtake_pos is not None \ - and self.current_pos is not None: + if self.init_overtake_pos is not None and self.current_pos is not None: distance = np.linalg.norm( - self.init_overtake_pos[:2] - self.current_pos[:2]) + self.init_overtake_pos[:2] - self.current_pos[:2] + ) # self.logfatal(f"Unstuck Distance in mp: {distance}") # clear distance to last unstuck -> avoid spamming overtake if distance > UNSTUCK_OVERTAKE_FLAG_CLEAR_DISTANCE: @@ -490,8 +504,7 @@ def __get_speed_unstuck(self, behavior: str) -> float: # create overtake trajectory starting 6 meters before # the obstacle # 6 worked well in tests, but can be adjusted - self.overtake_fallback(self.unstuck_distance, pose_list, - unstuck=True) + self.overtake_fallback(self.unstuck_distance, pose_list, unstuck=True) self.logfatal("Overtake Trajectory while unstuck!") self.unstuck_overtake_flag = True self.init_overtake_pos = self.current_pos[:2] @@ -550,7 +563,7 @@ def __get_speed_overtake(self, behavior: str) -> float: elif behavior == bs.ot_enter_slow.name: speed = self.__calc_speed_to_stop_overtake() elif behavior == bs.ot_leave.name: - speed = convert_to_ms(30.)
+ speed = convert_to_ms(30.0) return speed def __get_speed_cruise(self) -> float: @@ -561,8 +574,7 @@ def __calc_speed_to_stop_intersection(self) -> float: stopline = self.__calc_virtual_stopline() # calculate speed needed for stopping - v_stop = max(convert_to_ms(10.), - convert_to_ms(stopline / 0.8)) + v_stop = max(convert_to_ms(10.0), convert_to_ms(stopline / 0.8)) if v_stop > bs.int_app_init.speed: v_stop = bs.int_app_init.speed if stopline < target_distance: @@ -572,8 +584,7 @@ def __calc_speed_to_stop_intersection(self) -> float: def __calc_speed_to_stop_lanechange(self) -> float: stopline = self.__calc_virtual_change_point() - v_stop = max(convert_to_ms(10.), - convert_to_ms(stopline / 0.8)) + v_stop = max(convert_to_ms(10.0), convert_to_ms(stopline / 0.8)) if v_stop > bs.lc_app_init.speed: v_stop = bs.lc_app_init.speed if stopline < TARGET_DISTANCE_TO_STOP: @@ -582,8 +593,7 @@ def __calc_speed_to_stop_lanechange(self) -> float: def __calc_speed_to_stop_overtake(self) -> float: stopline = self.__calc_virtual_overtake() - v_stop = max(convert_to_ms(10.), - convert_to_ms(stopline / 0.8)) + v_stop = max(convert_to_ms(10.0), convert_to_ms(stopline / 0.8)) if stopline < TARGET_DISTANCE_TO_STOP: v_stop = 0.0 @@ -608,8 +618,7 @@ def __calc_virtual_stopline(self) -> float: return 0.0 def __calc_virtual_overtake(self) -> float: - if (self.__collision_point is not None) and \ - self.__collision_point != np.inf: + if (self.__collision_point is not None) and self.__collision_point != np.inf: return self.__collision_point else: return 0.0 @@ -621,15 +630,17 @@ def run(self): """ def loop(timer_event=None): - if (self.__curr_behavior is not None and - self.__acc_speed is not None and - self.__corners is not None): + if ( + self.__curr_behavior is not None + and self.__acc_speed is not None + and self.__corners is not None + ): self.trajectory.header.stamp = rospy.Time.now() self.traj_pub.publish(self.trajectory) - self.update_target_speed(self.__acc_speed, - 
self.__curr_behavior) + self.update_target_speed(self.__acc_speed, self.__curr_behavior) else: self.velocity_pub.publish(0.0) + self.new_timer(self.control_loop_rate, loop) self.spin() @@ -639,7 +650,7 @@ def loop(timer_event=None): main function starts the MotionPlanning node :param args: """ - roscomp.init('MotionPlanning') + roscomp.init("MotionPlanning") try: node = MotionPlanning() node.run() diff --git a/code/planning/src/local_planner/utils.py b/code/planning/src/local_planner/utils.py index 63cf5600..d976506d 100644 --- a/code/planning/src/local_planner/utils.py +++ b/code/planning/src/local_planner/utils.py @@ -3,6 +3,7 @@ import math import carla import os + # import rospy @@ -56,17 +57,19 @@ def location_to_gps(lat_ref: float, lon_ref: float, x: float, y: float): scale = math.cos(lat_ref * math.pi / 180.0) mx = scale * lon_ref * math.pi * EARTH_RADIUS_EQUA / 180.0 - my = scale * EARTH_RADIUS_EQUA * math.log(math.tan((90.0 + lat_ref) * - math.pi / 360.0)) + my = ( + scale + * EARTH_RADIUS_EQUA + * math.log(math.tan((90.0 + lat_ref) * math.pi / 360.0)) + ) mx += x my -= y lon = mx * 180.0 / (math.pi * EARTH_RADIUS_EQUA * scale) - lat = 360.0 * math.atan(math.exp(my / (EARTH_RADIUS_EQUA * scale))) /\ - math.pi - 90.0 + lat = 360.0 * math.atan(math.exp(my / (EARTH_RADIUS_EQUA * scale))) / math.pi - 90.0 z = 703 - return {'lat': lat, 'lon': lon, 'z': z} + return {"lat": lat, "lon": lon, "z": z} def calculate_rule_of_thumb(emergency, speed): @@ -81,7 +84,7 @@ def calculate_rule_of_thumb(emergency, speed): float: distance calculated with rule of thumb """ reaction_distance = speed - braking_distance = (speed * 0.36)**2 + braking_distance = (speed * 0.36) ** 2 if emergency: # Emergency brake is really effective in Carla return reaction_distance + braking_distance / 2 @@ -89,8 +92,9 @@ def calculate_rule_of_thumb(emergency, speed): return reaction_distance + braking_distance -def approx_obstacle_pos(distance: float, heading: float, - ego_pos: np.array, speed: 
float): +def approx_obstacle_pos( + distance: float, heading: float, ego_pos: np.array, speed: float +): """calculate the position of the obstacle in the global coordinate system based on ego position, heading and distance @@ -103,7 +107,7 @@ def approx_obstacle_pos(distance: float, heading: float, Returns: np.array: approximated position of the obstacle """ - rotation_matrix = Rotation.from_euler('z', heading) + rotation_matrix = Rotation.from_euler("z", heading) # Create distance vector with 0 rotation relative_position_local = np.array([distance, 0, 0]) @@ -120,18 +124,18 @@ def approx_obstacle_pos(distance: float, heading: float, # calculate the front left corner of the vehicle offset = np.array([1, 0, 0]) - rotation_adjusted = Rotation.from_euler('z', heading + math.radians(90)) + rotation_adjusted = Rotation.from_euler("z", heading + math.radians(90)) offset_front = rotation_adjusted.apply(offset) # calculate back right corner of the vehicle - rotation_adjusted = Rotation.from_euler('z', heading + math.radians(270)) + rotation_adjusted = Rotation.from_euler("z", heading + math.radians(270)) offset_back = rotation_adjusted.apply(offset) - vehicle_position_global_end = vehicle_position_global_start + \ - length_vector + offset_back + vehicle_position_global_end = ( + vehicle_position_global_start + length_vector + offset_back + ) - return vehicle_position_global_start + offset_front, \ - vehicle_position_global_end + return vehicle_position_global_start + offset_front, vehicle_position_global_end def convert_to_ms(speed: float): @@ -152,8 +156,8 @@ def spawn_car(distance): Args: distance (float): distance """ - CARLA_HOST = os.environ.get('CARLA_HOST', 'paf-carla-simulator-1') - CARLA_PORT = int(os.environ.get('CARLA_PORT', '2000')) + CARLA_HOST = os.environ.get("CARLA_HOST", "paf-carla-simulator-1") + CARLA_PORT = int(os.environ.get("CARLA_PORT", "2000")) client = carla.Client(CARLA_HOST, CARLA_PORT) @@ -165,13 +169,14 @@ def spawn_car(distance): # vehicle = 
world.spawn_actor(bp, world.get_map().get_spawn_points()[0]) bp = blueprint_library.filter("model3")[0] for actor in world.get_actors(): - if actor.attributes.get('role_name') == "hero": + if actor.attributes.get("role_name") == "hero": ego_vehicle = actor break - spawnPoint = carla.Transform(ego_vehicle.get_location() + - carla.Location(y=distance.data), - ego_vehicle.get_transform().rotation) + spawnPoint = carla.Transform( + ego_vehicle.get_location() + carla.Location(y=distance.data), + ego_vehicle.get_transform().rotation, + ) vehicle = world.spawn_actor(bp, spawnPoint) vehicle.set_autopilot(False) @@ -204,7 +209,7 @@ def filter_vision_objects(float_array, oncoming): # Reshape array to 3 columns and n rows (one row per object) float_array = np.asarray(float_array) - float_array = np.reshape(float_array, (float_array.size//3, 3)) + float_array = np.reshape(float_array, (float_array.size // 3, 3)) # Filter all rows that contain np.inf float_array = float_array[~np.any(np.isinf(float_array), axis=1), :] if float_array.size == 0: @@ -217,8 +222,7 @@ def filter_vision_objects(float_array, oncoming): # Get cars that are on our lane if oncoming: - cars_in_front = \ - all_cars[np.where(all_cars[:, 2] > 0.3)] + cars_in_front = all_cars[np.where(all_cars[:, 2] > 0.3)] if cars_in_front.size != 0: cars_in_front = cars_in_front[np.where(cars_in_front[:, 2] < 1.3)] else: diff --git a/code/test-route/src/test_route.py b/code/test-route/src/test_route.py index 8d0e59a2..3e699a35 100755 --- a/code/test-route/src/test_route.py +++ b/code/test-route/src/test_route.py @@ -4,6 +4,7 @@ import ros_compatibility as roscomp from ros_compatibility.node import CompatibleNode import carla + # from carla import command import rospy import random @@ -11,17 +12,18 @@ class TestRoute(CompatibleNode): def __init__(self): - super(TestRoute, self).__init__('testRoute') + super(TestRoute, self).__init__("testRoute") - self.control_loop_rate = self.get_param('control_loop_rate', 0.025) - 
self.role_name = self.get_param('role_name', 'ego_vehicle') - self.follow_hero = self.get_param('follow_hero', True) - self.vehicle_number = self.get_param('vehicle_number', 50) - self.only_cars = self.get_param('only_cars', False) - self.disable_vehicle_lane_change = \ - self.get_param('disable_vehicle_lane_change', False) + self.control_loop_rate = self.get_param("control_loop_rate", 0.025) + self.role_name = self.get_param("role_name", "ego_vehicle") + self.follow_hero = self.get_param("follow_hero", True) + self.vehicle_number = self.get_param("vehicle_number", 50) + self.only_cars = self.get_param("only_cars", False) + self.disable_vehicle_lane_change = self.get_param( + "disable_vehicle_lane_change", False + ) - host = os.environ.get('CARLA_SIM_HOST', 'localhost') + host = os.environ.get("CARLA_SIM_HOST", "localhost") self.client = carla.Client(host, 2000) self.client.set_timeout(60.0) @@ -49,16 +51,17 @@ def __init__(self): self.spectator = self.world.get_spectator() def spawn_traffic(self): - self.loginfo('Spawning traffic') + self.loginfo("Spawning traffic") spawn_points = self.world.get_map().get_spawn_points() hero_location = self.hero.get_location() spawn_points.sort(key=lambda x: x.location.distance(hero_location)) - blueprints = self.world.get_blueprint_library().filter('vehicle.*') + blueprints = self.world.get_blueprint_library().filter("vehicle.*") if self.only_cars: - blueprints = [b for b in blueprints - if int(b.get_attribute('number_of_wheels')) == 4] + blueprints = [ + b for b in blueprints if int(b.get_attribute("number_of_wheels")) == 4 + ] vehicles = [] max_vehicles = min([self.vehicle_number, len(spawn_points)]) @@ -70,17 +73,19 @@ def spawn_traffic(self): for _, transform in enumerate(spawn_points[:max_vehicles]): blueprint = random.choice(blueprints) - if blueprint.has_attribute('driver_id'): - driver_id = random.choice(blueprint.get_attribute('driver_id') - .recommended_values) - blueprint.set_attribute('driver_id', driver_id) + if 
blueprint.has_attribute("driver_id"): + driver_id = random.choice( + blueprint.get_attribute("driver_id").recommended_values + ) + blueprint.set_attribute("driver_id", driver_id) - if blueprint.has_attribute('color'): - color = random.choice(blueprint.get_attribute('color') - .recommended_values) - blueprint.set_attribute('color', color) + if blueprint.has_attribute("color"): + color = random.choice( + blueprint.get_attribute("color").recommended_values + ) + blueprint.set_attribute("color", color) - blueprint.set_attribute('role_name', 'autopilot') + blueprint.set_attribute("role_name", "autopilot") vehicle = self.world.try_spawn_actor(blueprint, transform) @@ -100,7 +105,7 @@ def spawn_traffic(self): # else: # vehicles.append(response.actor_id) - self.loginfo('Spawned {} vehicles'.format(len(vehicles))) + self.loginfo("Spawned {} vehicles".format(len(vehicles))) def wait_for_hero(self): while not rospy.is_shutdown(): @@ -108,9 +113,12 @@ def wait_for_hero(self): if not actors: continue - self.hero = [a for a in actors - if 'role_name' in a.attributes and - a.attributes.get('role_name') == self.role_name] + self.hero = [ + a + for a in actors + if "role_name" in a.attributes + and a.attributes.get("role_name") == self.role_name + ] if self.hero: self.hero = self.hero[0] break @@ -118,35 +126,35 @@ def wait_for_hero(self): def set_spectator(self, set_rotation=False): transform = self.hero.get_transform() location = carla.Location( - x=transform.location.x, - y=transform.location.y, - z=transform.location.z + 2) + x=transform.location.x, y=transform.location.y, z=transform.location.z + 2 + ) if set_rotation: self.spectator.set_transform( carla.Transform( - location, carla.Rotation( + location, + carla.Rotation( pitch=transform.rotation.pitch - 15, yaw=transform.rotation.yaw, - roll=transform.rotation.roll - ) + roll=transform.rotation.roll, + ), ) ) else: self.spectator.set_location(location) def run(self): - self.loginfo('Test-Route node running') + 
self.loginfo("Test-Route node running") self.set_spectator(set_rotation=True) def loop(timer_event=None): self.set_spectator() if self.follow_hero: - self.loginfo('Following hero') + self.loginfo("Following hero") self.new_timer(self.control_loop_rate, loop) else: - self.loginfo('Not following hero, setting spectator only once') + self.loginfo("Not following hero, setting spectator only once") self.set_spectator() sleep(5) @@ -156,7 +164,7 @@ def loop(timer_event=None): def main(args=None): - roscomp.init('testRoute', args=args) + roscomp.init("testRoute", args=args) try: node = TestRoute() @@ -167,5 +175,5 @@ def main(args=None): roscomp.shutdown() -if __name__ == '__main__': +if __name__ == "__main__": main() diff --git a/doc/development/templates/template_class.py b/doc/development/templates/template_class.py index 164c3585..5267d75c 100644 --- a/doc/development/templates/template_class.py +++ b/doc/development/templates/template_class.py @@ -28,6 +28,7 @@ # two blank lines between top level functions and class definition + ############################# # 3. 
Class-Defintion # ############################# diff --git a/doc/development/templates/template_class_no_comments.py b/doc/development/templates/template_class_no_comments.py index 13640a5b..365fdcfd 100644 --- a/doc/development/templates/template_class_no_comments.py +++ b/doc/development/templates/template_class_no_comments.py @@ -20,12 +20,12 @@ def test_function1(self, param1): def test_function2(cls): """ - + :return: """ pass - def test_function3(self): # inline comment + def test_function3(self): # inline comment # This is a block comment # It goes over multiple lines # All comments start with a blank space @@ -51,10 +51,10 @@ def test_function5(self, param1, param2): return param1 def main(self): - """_summary_ - """ + """_summary_""" print("Hello World") + if __name__ == "__main__": runner = TestClass() runner.main() diff --git a/doc/perception/experiments/object-detection-model_evaluation/globals.py b/doc/perception/experiments/object-detection-model_evaluation/globals.py index 5cb86c0a..325107dc 100644 --- a/doc/perception/experiments/object-detection-model_evaluation/globals.py +++ b/doc/perception/experiments/object-detection-model_evaluation/globals.py @@ -1,12 +1,12 @@ -IMAGE_BASE_FOLDER = '/home/maxi/paf/code/output/12-dev/rgb/center' +IMAGE_BASE_FOLDER = "/home/maxi/paf/code/output/12-dev/rgb/center" IMAGES_FOR_TEST = { - 'start': '1600.png', - 'intersection': '1619.png', - 'traffic_light': '1626.png', - 'traffic': '1660.png', - 'bicycle_far': '1663.png', - 'bicycle_close': '1668.png', - 'construction_sign_far': '2658.png', - 'construction_sign_close': '2769.png' + "start": "1600.png", + "intersection": "1619.png", + "traffic_light": "1626.png", + "traffic": "1660.png", + "bicycle_far": "1663.png", + "bicycle_close": "1668.png", + "construction_sign_far": "2658.png", + "construction_sign_close": "2769.png", } diff --git a/doc/perception/experiments/object-detection-model_evaluation/pt.py 
b/doc/perception/experiments/object-detection-model_evaluation/pt.py index 145fbcfe..29ec47d3 100644 --- a/doc/perception/experiments/object-detection-model_evaluation/pt.py +++ b/doc/perception/experiments/object-detection-model_evaluation/pt.py @@ -1,16 +1,16 @@ -''' +""" Docs: https://pytorch.org/vision/stable/models.html#object-detection -''' +""" import os from time import perf_counter import torch import torchvision -from torchvision.models.detection.faster_rcnn import \ - FasterRCNN_MobileNet_V3_Large_320_FPN_Weights, \ - FasterRCNN_ResNet50_FPN_V2_Weights -from torchvision.models.detection.retinanet import \ - RetinaNet_ResNet50_FPN_V2_Weights +from torchvision.models.detection.faster_rcnn import ( + FasterRCNN_MobileNet_V3_Large_320_FPN_Weights, + FasterRCNN_ResNet50_FPN_V2_Weights, +) +from torchvision.models.detection.retinanet import RetinaNet_ResNet50_FPN_V2_Weights from globals import IMAGE_BASE_FOLDER, IMAGES_FOR_TEST from torchvision.utils import draw_bounding_boxes from pathlib import Path @@ -20,28 +20,27 @@ from torchvision.transforms.functional import to_pil_image ALL_MODELS = { - 'fasterrcnn_mobilenet_v3_large_320_fpn': - FasterRCNN_MobileNet_V3_Large_320_FPN_Weights, - 'fasterrcnn_resnet50_fpn_v2': FasterRCNN_ResNet50_FPN_V2_Weights, - 'retinanet_resnet50_fpn_v2': RetinaNet_ResNet50_FPN_V2_Weights, + "fasterrcnn_mobilenet_v3_large_320_fpn": FasterRCNN_MobileNet_V3_Large_320_FPN_Weights, + "fasterrcnn_resnet50_fpn_v2": FasterRCNN_ResNet50_FPN_V2_Weights, + "retinanet_resnet50_fpn_v2": RetinaNet_ResNet50_FPN_V2_Weights, } def load_model(model_name): - print('Selected model: ' + model_name) - print('Loading model...', end='') + print("Selected model: " + model_name) + print("Loading model...", end="") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") weights = ALL_MODELS[model_name].DEFAULT - model = torchvision.models.detection.__dict__[model_name]( - weights=weights - ).to(device) + model = 
torchvision.models.detection.__dict__[model_name](weights=weights).to( + device + ) model.eval() return model, weights, device def load_image(image_path, model_weights, device): img = Image.open(image_path) - img = img.convert('RGB') + img = img.convert("RGB") img = transforms.Compose([transforms.PILToTensor()])(img) img = model_weights.transforms()(img) img = img.unsqueeze_(0) @@ -60,11 +59,11 @@ def load_image(image_path, model_weights, device): image_np = load_image(image_path, weights, device) if first_gen: - print('Running warmup inference...') + print("Running warmup inference...") model(image_np) first_gen = False - print(f'Running inference for {p}... ') + print(f"Running inference for {p}... ") start_time = perf_counter() @@ -79,21 +78,20 @@ def load_image(image_path, model_weights, device): label_id_offset = -1 - image_np_with_detections = torch.tensor(image_np * 255, - dtype=torch.uint8) - boxes = result['boxes'] - scores = result['scores'] - labels = [weights.meta["categories"][i] for i in result['labels']] + image_np_with_detections = torch.tensor(image_np * 255, dtype=torch.uint8) + boxes = result["boxes"] + scores = result["scores"] + labels = [weights.meta["categories"][i] for i in result["labels"]] - box = draw_bounding_boxes(image_np_with_detections[0], boxes, labels, - colors='red', width=2) + box = draw_bounding_boxes( + image_np_with_detections[0], boxes, labels, colors="red", width=2 + ) box_img = to_pil_image(box) file_name = Path(image_path).stem plt.figure(figsize=(32, 18)) - plt.title(f'PyTorch - {m} - {p} - {elapsed_time*1000:.0f}ms', - fontsize=30) + plt.title(f"PyTorch - {m} - {p} - {elapsed_time*1000:.0f}ms", fontsize=30) plt.imshow(box_img) - plt.savefig(f'{IMAGE_BASE_FOLDER}/result/{file_name}_PT_{m}.jpg') + plt.savefig(f"{IMAGE_BASE_FOLDER}/result/{file_name}_PT_{m}.jpg") plt.close() diff --git a/doc/perception/experiments/object-detection-model_evaluation/pylot.py 
b/doc/perception/experiments/object-detection-model_evaluation/pylot.py index d59e5e75..19e2a3b1 100644 --- a/doc/perception/experiments/object-detection-model_evaluation/pylot.py +++ b/doc/perception/experiments/object-detection-model_evaluation/pylot.py @@ -1,7 +1,7 @@ -''' +""" Docs: https://www.tensorflow.org/hub/tutorials/tf2_object_detection, https://pylot.readthedocs.io/en/latest/perception.detection.html -''' +""" from globals import IMAGE_BASE_FOLDER, IMAGES_FOR_TEST @@ -20,55 +20,57 @@ from object_detection.utils import visualization_utils as viz_utils -matplotlib.use('TkAgg') +matplotlib.use("TkAgg") -tf.get_logger().setLevel('ERROR') +tf.get_logger().setLevel("ERROR") ALL_MODELS = [ - 'faster-rcnn', - 'ssdlite-mobilenet-v2', - 'ssd-mobilenet-fpn-640', - 'ssd-mobilenet-v1', - 'ssd-mobilenet-v1-fpn' + "faster-rcnn", + "ssdlite-mobilenet-v2", + "ssd-mobilenet-fpn-640", + "ssd-mobilenet-v1", + "ssd-mobilenet-v1-fpn", ] -MODEL_BASE_FOLDER = '/home/maxi/Downloads/models/obstacle_detection' +MODEL_BASE_FOLDER = "/home/maxi/Downloads/models/obstacle_detection" -LABEL_FILE = '/home/maxi/Downloads/pylot.names' +LABEL_FILE = "/home/maxi/Downloads/pylot.names" def load_image_into_numpy_array(path): - image_data = tf.io.gfile.GFile(path, 'rb').read() + image_data = tf.io.gfile.GFile(path, "rb").read() image = Image.open(BytesIO(image_data)) (im_width, im_height) = image.size - return np.array(image.convert('RGB').getdata()).reshape( - (1, im_height, im_width, 3)).astype(np.uint8) + return ( + np.array(image.convert("RGB").getdata()) + .reshape((1, im_height, im_width, 3)) + .astype(np.uint8) + ) def load_model(model_name): model_handle = os.path.join(MODEL_BASE_FOLDER, model_name) - print('Selected model: ' + model_name) + print("Selected model: " + model_name) - print('Loading model...', end='') + print("Loading model...", end="") hub_model = hub.load(model_handle) - print(' done!') + print(" done!") return hub_model def get_category_index(label_file): - with 
open(label_file, 'r') as f: + with open(label_file, "r") as f: labels = f.readlines() labels = [label.strip() for label in labels] - category_index = \ - {i: {'id': i, 'name': name} for i, name in enumerate(labels)} + category_index = {i: {"id": i, "name": name} for i, name in enumerate(labels)} return category_index -if not os.path.exists(f'{IMAGE_BASE_FOLDER}/result'): - os.makedirs(f'{IMAGE_BASE_FOLDER}/result') +if not os.path.exists(f"{IMAGE_BASE_FOLDER}/result"): + os.makedirs(f"{IMAGE_BASE_FOLDER}/result") category_index = get_category_index(LABEL_FILE) @@ -82,16 +84,16 @@ def get_category_index(label_file): image_tensor = tf.convert_to_tensor(image_np) if first_gen: - print('Running warmup inference...') - model.signatures['serving_default'](image_tensor) + print("Running warmup inference...") + model.signatures["serving_default"](image_tensor) first_gen = False - print(f'Running inference for {p}... ') + print(f"Running inference for {p}... ") start_time = perf_counter() # running inference - results = model.signatures['serving_default'](image_tensor) + results = model.signatures["serving_default"](image_tensor) elapsed_time = perf_counter() - start_time @@ -104,20 +106,20 @@ def get_category_index(label_file): viz_utils.visualize_boxes_and_labels_on_image_array( image_np_with_detections[0], - result['boxes'][0], - (result['classes'][0] + label_id_offset).astype(int), - result['scores'][0], + result["boxes"][0], + (result["classes"][0] + label_id_offset).astype(int), + result["scores"][0], category_index, use_normalized_coordinates=True, max_boxes_to_draw=200, - min_score_thresh=.10, - agnostic_mode=False) + min_score_thresh=0.10, + agnostic_mode=False, + ) file_name = Path(image_path).stem plt.figure(figsize=(32, 18)) - plt.title(f'Pylot (TF) - {m} - {p} - {elapsed_time*1000:.0f}ms', - fontsize=30) + plt.title(f"Pylot (TF) - {m} - {p} - {elapsed_time*1000:.0f}ms", fontsize=30) plt.imshow(image_np_with_detections[0]) - 
plt.savefig(f'{IMAGE_BASE_FOLDER}/result/{file_name}_TF_{m}.jpg') + plt.savefig(f"{IMAGE_BASE_FOLDER}/result/{file_name}_TF_{m}.jpg") plt.close() diff --git a/doc/perception/experiments/object-detection-model_evaluation/yolo.py b/doc/perception/experiments/object-detection-model_evaluation/yolo.py index 39d727b7..b4949622 100644 --- a/doc/perception/experiments/object-detection-model_evaluation/yolo.py +++ b/doc/perception/experiments/object-detection-model_evaluation/yolo.py @@ -1,9 +1,9 @@ -''' +""" Docs: https://docs.ultralytics.com/modes/predict/ https://docs.ultralytics.com/tasks/detect/#models https://docs.ultralytics.com/models/yolo-nas -''' +""" import os from globals import IMAGE_BASE_FOLDER, IMAGES_FOR_TEST @@ -12,33 +12,34 @@ import torch ALL_MODELS = { - 'yolov8n': YOLO, - 'yolov8s': YOLO, - 'yolov8m': YOLO, - 'yolov8l': YOLO, - 'yolov8x': YOLO, - 'yolo_nas_l': NAS, - 'yolo_nas_m': NAS, - 'yolo_nas_s': NAS, - 'rtdetr-l': RTDETR, - 'rtdetr-x': RTDETR, - 'yolov8x-seg': YOLO, - 'sam-l': SAM, - 'FastSAM-x': FastSAM, + "yolov8n": YOLO, + "yolov8s": YOLO, + "yolov8m": YOLO, + "yolov8l": YOLO, + "yolov8x": YOLO, + "yolo_nas_l": NAS, + "yolo_nas_m": NAS, + "yolo_nas_s": NAS, + "rtdetr-l": RTDETR, + "rtdetr-x": RTDETR, + "yolov8x-seg": YOLO, + "sam-l": SAM, + "FastSAM-x": FastSAM, } with torch.inference_mode(): for m, wrapper in ALL_MODELS.items(): - print('Selected model: ' + m) - model_path = os.path.join('yolo', m + '.pt') + print("Selected model: " + m) + model_path = os.path.join("yolo", m + ".pt") model = wrapper(model_path) for p in IMAGES_FOR_TEST: image_path = os.path.join(IMAGE_BASE_FOLDER, IMAGES_FOR_TEST[p]) img = Image.open(image_path) - _ = model.predict(source=img, save=True, save_conf=True, - line_width=1, half=True) + _ = model.predict( + source=img, save=True, save_conf=True, line_width=1, half=True + ) del model diff --git a/doc/research/paf23/planning/test_traj.py b/doc/research/paf23/planning/test_traj.py index 97283e6c..1e9edc71 100644 --- 
a/doc/research/paf23/planning/test_traj.py +++ b/doc/research/paf23/planning/test_traj.py @@ -1,18 +1,22 @@ -from frenet_optimal_trajectory_planner.FrenetOptimalTrajectory.fot_wrapper \ - import run_fot +from frenet_optimal_trajectory_planner.FrenetOptimalTrajectory.fot_wrapper import ( + run_fot, +) import numpy as np import matplotlib.pyplot as plt -wp = wp = np.r_[[np.full((50), 983.5889666959667)], - [np.linspace(5370.016106881272, 5399.016106881272, 50)]].T +wp = wp = np.r_[ + [np.full((50), 983.5889666959667)], + [np.linspace(5370.016106881272, 5399.016106881272, 50)], +].T initial_conditions = { - 'ps': 0, - 'target_speed': 6, - 'pos': np.array([983.5807552562393, 5370.014637890163]), - 'vel': np.array([5, 1]), - 'wp': wp, - 'obs': np.array([[983.568124548765, 5386.0219828457075, - 983.628124548765, 5386.0219828457075]]) + "ps": 0, + "target_speed": 6, + "pos": np.array([983.5807552562393, 5370.014637890163]), + "vel": np.array([5, 1]), + "wp": wp, + "obs": np.array( + [[983.568124548765, 5386.0219828457075, 983.628124548765, 5386.0219828457075]] + ), } hyperparameters = { @@ -39,9 +43,21 @@ "num_threads": 0, # set 0 to avoid using threaded algorithm } -result_x, result_y, speeds, ix, iy, iyaw, d, s, speeds_x, \ - speeds_y, misc, costs, success = run_fot(initial_conditions, - hyperparameters) +( + result_x, + result_y, + speeds, + ix, + iy, + iyaw, + d, + s, + speeds_x, + speeds_y, + misc, + costs, + success, +) = run_fot(initial_conditions, hyperparameters) if success: print("Success!") @@ -50,12 +66,18 @@ fig, ax = plt.subplots(1, 2) ax[0].scatter(wp[:, 0], wp[:, 1], label="original") - ax[0].scatter([983.568124548765, 983.628124548765], - [5386.0219828457075, 5386.0219828457075], label="object") + ax[0].scatter( + [983.568124548765, 983.628124548765], + [5386.0219828457075, 5386.0219828457075], + label="object", + ) ax[0].set_xticks([983.518124548765, 983.598124548765]) ax[1].scatter(result_x, result_y, label="frenet") - ax[1].scatter([983.568124548765, 
983.628124548765], - [5386.0219828457075, 5386.0219828457075], label="object") + ax[1].scatter( + [983.568124548765, 983.628124548765], + [5386.0219828457075, 5386.0219828457075], + label="object", + ) ax[1].set_xticks([983.518124548765, 983.598124548765]) plt.legend() plt.show() From daf2255f217aeec3654f2747c56dba298ec23c93 Mon Sep 17 00:00:00 2001 From: JulianTrommer Date: Tue, 15 Oct 2024 15:56:49 +0200 Subject: [PATCH 28/28] Refactored repo with linters --- .vscode/settings.json | 1 - build/agent_service.yaml | 4 ++ build/docker-compose.dev.yaml | 4 +- build/docker/agent-dev/dev_entrypoint.sh | 16 +---- build/docker/agent/Dockerfile | 3 +- build/docker/agent/Dockerfile_Submission | 35 ++++----- build/docker/agent/entrypoint.sh | 4 +- code/__init__.py | 0 code/perception/launch/perception.launch | 4 +- .../traffic_light_training.py | 8 +-- code/perception/src/traffic_light_node.py | 4 +- code/perception/src/vision_node.py | 4 +- code/planning/__init__.py | 1 - code/planning/src/behavior_agent/__init__.py | 0 .../src/behavior_agent/behavior_tree.py | 72 ++++++++----------- .../src/behavior_agent/behaviours/__init__.py | 3 - .../behavior_agent/behaviours/intersection.py | 12 ++-- .../behavior_agent/behaviours/lane_change.py | 2 +- .../behavior_agent/behaviours/maneuvers.py | 2 +- .../src/behavior_agent/behaviours/overtake.py | 3 +- .../src/local_planner/motion_planning.py | 7 +- doc/development/templates/template_class.py | 17 +++-- .../templates/template_class_no_comments.py | 8 +-- .../object-detection-model_evaluation/pt.py | 4 +- 24 files changed, 101 insertions(+), 117 deletions(-) delete mode 100644 code/__init__.py delete mode 100755 code/planning/__init__.py delete mode 100755 code/planning/src/behavior_agent/__init__.py mode change 100755 => 100644 code/planning/src/behavior_agent/behaviours/__init__.py diff --git a/.vscode/settings.json b/.vscode/settings.json index 817fbe31..260ba70c 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -1,7 
+1,6 @@ { "githubIssues.issueBranchTitle": "${issueNumber}-${sanitizedIssueTitle}", "githubIssues.queries": [ - { "label": "My Issues", "query": "default" diff --git a/build/agent_service.yaml b/build/agent_service.yaml index 8266fe9d..13d8bab1 100644 --- a/build/agent_service.yaml +++ b/build/agent_service.yaml @@ -3,6 +3,10 @@ services: build: dockerfile: build/docker/agent/Dockerfile context: ../ + args: + USERNAME: ${USERNAME} + USER_UID: ${USER_UID} + USER_GID: ${USER_GID} init: true tty: true shm_size: 2gb diff --git a/build/docker-compose.dev.yaml b/build/docker-compose.dev.yaml index 95d06f11..ab21bf03 100644 --- a/build/docker-compose.dev.yaml +++ b/build/docker-compose.dev.yaml @@ -1,5 +1,8 @@ # compose file for the development without a driving vehicle # "interactive" development without a car +include: + - roscore_service.yaml + services: agent-dev: build: @@ -24,5 +27,4 @@ services: - DISPLAY=${DISPLAY} network_mode: host privileged: true - entrypoint: ["/dev_entrypoint.sh"] command: bash -c "sudo chown -R ${USER_UID}:${USER_GID} ../ && sudo chmod -R a+w ../ && bash" diff --git a/build/docker/agent-dev/dev_entrypoint.sh b/build/docker/agent-dev/dev_entrypoint.sh index 14f912e3..2626fcb9 100755 --- a/build/docker/agent-dev/dev_entrypoint.sh +++ b/build/docker/agent-dev/dev_entrypoint.sh @@ -1,19 +1,7 @@ #!/bin/bash +set -e -# Source ROS setup source /opt/ros/noetic/setup.bash - -# Source the catkin workspace setup source /catkin_ws/devel/setup.bash -# Set up any additional environment variables if needed -export CARLA_ROOT=/opt/carla -export SCENARIO_RUNNER_ROOT=/opt/scenario_runner -export LEADERBOARD_ROOT=/opt/leaderboard - -# Execute the command passed to the script, or start a bash session if no command was given -if [ $# -eq 0 ]; then - exec bash -else - exec "$@" -fi \ No newline at end of file +exec "$@" diff --git a/build/docker/agent/Dockerfile b/build/docker/agent/Dockerfile index c8c05ceb..dff54814 100644 --- a/build/docker/agent/Dockerfile 
+++ b/build/docker/agent/Dockerfile @@ -170,12 +170,11 @@ RUN source /opt/ros/noetic/setup.bash && catkin_make ADD ./build/docker/agent/entrypoint.sh /entrypoint.sh - - # set the default working directory to the code WORKDIR /workspace/code RUN echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc +RUN echo "source /catkin_ws/devel/setup.bash" >> ~/.bashrc ENTRYPOINT ["/entrypoint.sh"] CMD ["bash", "-c", "sleep 10 && \ diff --git a/build/docker/agent/Dockerfile_Submission b/build/docker/agent/Dockerfile_Submission index 128a8bd8..8128266e 100644 --- a/build/docker/agent/Dockerfile_Submission +++ b/build/docker/agent/Dockerfile_Submission @@ -19,7 +19,7 @@ ARG DEBIAN_FRONTEND=noninteractive # install rendering dependencies for rviz / rqt RUN apt-get update \ - && apt-get install -y -qq --no-install-recommends \ + && apt-get install -y -qq --no-install-recommends \ libxext6 libx11-6 libglvnd0 libgl1 \ libglx0 libegl1 freeglut3-dev @@ -27,10 +27,10 @@ RUN apt-get update \ RUN apt-get install wget unzip RUN wget https://github.com/una-auxme/paf/releases/download/v0.0.1/PythonAPI_Leaderboard-2.0.zip -O PythonAPI.zip \ - && unzip PythonAPI.zip \ - && rm PythonAPI.zip \ - && mkdir -p /opt/carla \ - && mv PythonAPI /opt/carla/PythonAPI + && unzip PythonAPI.zip \ + && rm PythonAPI.zip \ + && mkdir -p /opt/carla \ + && mv PythonAPI /opt/carla/PythonAPI # build libgit2 RUN wget https://github.com/libgit2/libgit2/archive/refs/tags/v1.5.0.tar.gz -O libgit2-1.5.0.tar.gz \ @@ -67,12 +67,12 @@ ENV PYTHONPATH=$PYTHONPATH:/opt/carla/PythonAPI/carla/dist/carla-0.9.14-py3.7-li # install mlocate, pip, wget, git and some ROS dependencies for building the CARLA ROS bridge RUN apt-get update && apt-get install -y \ - mlocate python3-pip wget git python-is-python3 \ - ros-noetic-ackermann-msgs ros-noetic-derived-object-msgs \ - ros-noetic-carla-msgs ros-noetic-pcl-conversions \ - ros-noetic-rviz ros-noetic-rqt ros-noetic-pcl-ros ros-noetic-rosbridge-suite ros-noetic-rosbridge-server \ - 
ros-noetic-robot-pose-ekf ros-noetic-ros-numpy \ - ros-noetic-py-trees-ros ros-noetic-rqt-py-trees ros-noetic-rqt-reconfigure + mlocate python3-pip wget git python-is-python3 \ + ros-noetic-ackermann-msgs ros-noetic-derived-object-msgs \ + ros-noetic-carla-msgs ros-noetic-pcl-conversions \ + ros-noetic-rviz ros-noetic-rqt ros-noetic-pcl-ros ros-noetic-rosbridge-suite ros-noetic-rosbridge-server \ + ros-noetic-robot-pose-ekf ros-noetic-ros-numpy \ + ros-noetic-py-trees-ros ros-noetic-rqt-py-trees ros-noetic-rqt-reconfigure SHELL ["/bin/bash", "-c"] @@ -178,12 +178,13 @@ ADD ./build/docker/agent/entrypoint.sh /entrypoint.sh WORKDIR /workspace/code RUN echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc +RUN echo "source /catkin_ws/devel/setup.bash" >> ~/.bashrc ENTRYPOINT ["/entrypoint.sh"] CMD ["bash", "-c", "sleep 10 && python3 /opt/leaderboard/leaderboard/leaderboard_evaluator.py --debug=${DEBUG_CHALLENGE} \ ---repetitions=${REPETITIONS} \ ---checkpoint=${CHECKPOINT_ENDPOINT} \ ---track=${CHALLENGE_TRACK} \ ---agent=${TEAM_AGENT} \ ---routes=${ROUTES} \ ---host=${CARLA_SIM_HOST}"] + --repetitions=${REPETITIONS} \ + --checkpoint=${CHECKPOINT_ENDPOINT} \ + --track=${CHALLENGE_TRACK} \ + --agent=${TEAM_AGENT} \ + --routes=${ROUTES} \ + --host=${CARLA_SIM_HOST}"] diff --git a/build/docker/agent/entrypoint.sh b/build/docker/agent/entrypoint.sh index 61e51dc4..2626fcb9 100755 --- a/build/docker/agent/entrypoint.sh +++ b/build/docker/agent/entrypoint.sh @@ -1,7 +1,7 @@ #!/bin/bash set -e -source "/opt/ros/noetic/setup.bash" -source "/catkin_ws/devel/setup.bash" +source /opt/ros/noetic/setup.bash +source /catkin_ws/devel/setup.bash exec "$@" diff --git a/code/__init__.py b/code/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/code/perception/launch/perception.launch b/code/perception/launch/perception.launch index 45d970be..8d6072e7 100644 --- a/code/perception/launch/perception.launch +++ b/code/perception/launch/perception.launch @@ -41,8 
+41,8 @@