diff --git a/b b/b
index b819b9b8792adc4b207fb1a01855dbb2ac25e4a0..a6d5b193aa6c6d525828e9de7677701f3e4b1252 160000
--- a/b
+++ b/b
@@ -1 +1 @@
-Subproject commit b819b9b8792adc4b207fb1a01855dbb2ac25e4a0
+Subproject commit a6d5b193aa6c6d525828e9de7677701f3e4b1252
diff --git a/dox/cmds.cpp b/dox/cmds.cpp
new file mode 120000
index 0000000000000000000000000000000000000000..fcee97203a0a1f31526b8214859b405b153c3612
--- /dev/null
+++ b/dox/cmds.cpp
@@ -0,0 +1 @@
+/home/sugu/dox/cmds/cmds.cpp
\ No newline at end of file
diff --git a/dox/notes b/dox/notes
index bdede4a34b78e7739c5c96e53296ef1d3a87e9b5..d8b610e6decd12872a63c6c0f205ab6cc575bac1 100644
--- a/dox/notes
+++ b/dox/notes
@@ -15,239 +15,9 @@ get feedback (occasionally)
 ## Software A - Frame grab - recorder - ctb
 #############################################################
 
-# setup Vimba
-	> see Vimba_installation_under_Linux.pdf
-	> download and unpack Vimba SDK
-	> tar -xzf ./Vimba.tgz
-	> sudo ./VimbaGigETL/Install.sh
 
-## SETUP GT1920C:
-> connect laptop and cam directly via ethernet
-> set MTU to 8228 (jumbo frames/packets)
-> set fixed IP for eth-adapter first and then cam (use Vimbaviewer -> force IP)
-	eth 169.254.100.1
-	cam 169.254.x.x # on restart will pick random IP...
 
-	mac
-		000f310338D3
-		000f310338D4
-	sub 255.255.0.0
-  (gat 0.0.0.0 local / none)
-	-> ip address of each adapter needs to be on a unique subnet
-	-> for multiple cams calc bandwidth and use a switch
 
-# What bandwidth do I have? Do I need multiple ports?
-bandwidth = fps * pixel format (bpp) * resolution (* ncams)
-StreamBytesPerSecond = 17 * 3 * 1936 * 1216 * 1 = 120 062 976 ~= 120,1 MBps < 125 MBps
-
-	1. Determine max_speed with the highest fps!
-	2. Take max_speed and reduce fps so it still fits 2*max_speed
-		-> subsample the same video?
-	3. calc mean_err by comparing 1. and 2. -> add it to 2., as 2. will be lower.
-
-	!! exposure and fps: at 10fps exposure can't be more than 100ms!
-	!! exposure and fps: at 17fps exposure can't be more than 59ms!
-	Best practice: set gain to the lowest possible value and increase exposure as needed
-	!! if you use more than one cam on one interface, the available bandwidth has to be shared between the cameras.
-
-###############################
-# bandwidth calculations for Alvium G1 240C
-MAX BANDWIDTH for Gigabit ethernet port ~= 125 MBps
-max res: 1936x1216
-max fps: 49
-pixelformat: RGB8  > different options. eg 10bit/12bit RGB/MONO.
-	1936 * 1216 * 49 * 1 ~= 115,3 MBps >> MONO8 @ maxFPS
-	1936 * 1216 * 17 * 1 ~= 40,0 MBps >> MONO8 @ 17FPS
-		@ 1 Min -> 2.4GB
-		@ 30 Mins -> 72GB
-		with 60 recordings: 4.3TB
-	1936 * 1216 * 17 * 3 ~= 120,1 MBps >> RGB8 @ 17FPS
-		@ 1 Min -> 7.2GB
-		@ 30 Mins -> 216.1GB
-		with 60 recordings: 13 TB
-
-	1936 * 1216 * 10 * 3 ~= 70,6 MBps
-		@ 1 Min -> 4.2 GB
-		@ 30 Mins -> 127.1 GB
-		with 60 recordings: 7.6TB
-
-Storage old ws:
-	2TB: 2*1TB (7200 RPM, SATA 6 Gb/s)
-	8GB DDR3 RAM
-	Intel Xeon 3.6 GHz (4 cores)
-
-	default bandwidth: 115 MBps
--------------------------------
-Save "reasonable" settings in XML.
-If dark, set ExposureTimeAbs
-Q: how to set fps to 3?
-
-
-#####################################
-## Background Subtraction - BGS
-####################################
-
-
-#define bgs_register(x) static BGS_Register<x> register_##x(quote(x))
-## >> ## pastes tokens together: register_##x -> register_Tapter
-quote() stringifies its argument (adds "")
-
-bgs_register(Tapter)
->> static BGS_Register<Tapter> register_Tapter("Tapter")
-
-/usr/include/opencv4/opencv2/opencv.hpp
-PCA?
-
-
-> difference
-	virtual();
-	virtual(){ /*empty*/ }
-	virtual() = 0; #pure virtual
-
-> also difference
-	> virtual dtor(): needed if a pointer to the base class deletes the object
-	> pure virtual dtor(): still needs a function body, because the dtor is a special function which is not overridden - derived dtors always call it
-	> interface class: only pure virtual functions (plus a virtual dtor), no state
-	> abc - abstract base class: can't be instantiated anymore
-	> abc <> interface? an interface is an abc whose members are all pure virtual
-
-
-	// IplImage is the legacy pre-cv::Mat image type and is no longer supported..
-
-	> use smart pointers like so:
-	auto videoAnalysis = std::make_unique<VideoAnalysis>();
-	videoCapture = std::make_unique<VideoCapture>();
-	frameProcessor = std::make_shared<FrameProcessor>();
-
-
-## libarchive stuff
-###############################
-	archive_read_xxx()
-	archive_write_xxx()
-	struct archive_entry
-
-huge workaround for corrupted files
-clock
-random
-command arg parser
-
-
-
-???
-#if CV_MAJOR_VERSION > 3 || (CV_MAJOR_VERSION == 3 && CV_SUBMINOR_VERSION >= 9)
-  IplImage _frame = cvIplImage(img_input);
-  frame = &_frame;
-#else
-  frame = new IplImage(img_input);
-#endif
-
-#smartpointer??
-#elif CV_MAJOR_VERSION >= 3
-      cv::Ptr<cv::BackgroundSubtractorMOG2> mog;
-#endif
-
-###############################
-WS / VM
-		> 8 cores, 16GB RAM, 2 TB for 1cam
-		> 48 cores, 128GB RAM, 6 TB for 6cam
-###############################
-
-> What did Tboy do... and why did he do it?
-	> forked branch bgslib_qtgui_2.0.0
-	> Tapter
-			> adapter for model
-			> was probably copied from LBAdaptiveSOM.cpp
-		--> which is disabled in openCV4
-
-
-Which Background Subtraction Algo to use??
-	median, mean, framedifference
-		+ simple, fast,
-		- not robust if light/bg changes
-		- slow changes ?
-	> adaptive bg?
-	fuzzy?
-	mixture
-
-
-NAMES
-	Kernel?
-	LBSP?
-	Multilayer?
-	Sigma-Delta?
-	Vibe, T2F ,dp ,lb ,vumeter?
-	choquet, sugeno, zivkovic, pratimediod, LOBSTER
-
-Test/Use most common >> Ground Truth
-	Frame Difference
-	WeightedMovingMean / Variance
-	LBAdaptiveSOM
-	MOG2 (Mixture Of Gaussian) MixtureOfGaussianV2.h
-	KNN (K Nearest Neighbour)
-		> fast for small fg obj
-		> TRY!
-	FuzzySugenoIntegral.h
-
-	LSBP - Local Binary Similarity Patterns - (2013)
-	LSBP-based GSoC ?
-	SuBSENSE: improved spatiotemporal LBSP + color features (2014)
-
-	Combineable with
-		ROI
-		Canny Edge Detection
-
-
-> bg modeling to update BG (eg moving trees) > pixel with threshold
-> optic flow (camera is also moving) > vectorial estimation of our own movement
-
-
-features
-	edge
-		canny edge detector + calc contour
-		> https://en.wikipedia.org/wiki/Canny_edge_detector
-	roi
-		crop
-	color
-		RGB - not so robust by itself (sensitive to illumination, shadows, oscillations ...)
-		YUV
-		YCrCb - brightness, chroma, color
-	texture
-		robust to illumination and shadow
-		eg Local Binary Pattern (LBP)
-
-https://github.com/murari023/awesome-background-subtraction (2021 new stuff!)
-
-https://learnopencv.com/background-subtraction-with-opencv-and-bgs-libraries/
-http://docs.opencv.org/2.4/doc/tutorials/imgproc/gausian_median_blur_bilateral_filter/gausian_median_blur_bilateral_filter.html
-https://hackthedeveloper.com/background-subtraction-opencv-python/ #mog2 + knn in python
-https://docs.opencv.org/4.3.0/d4/dd5/classcv_1_1bgsegm_1_1BackgroundSubtractorGSOC.html#details #GSOC LSBP ALGO from openCV bgsegm.hpp
-
-https://openaccess.thecvf.com/content_cvpr_workshops_2014/W12/papers/St-Charles_Flexible_Background_Subtraction_2014_CVPR_paper.pdf
-https://www.scitepress.org/Papers/2018/66296/66296.pdf #vehicle tracking latvia. 2018, BackgroundSubtractorMOG, BackgroundSubtractorMOG2 (zivkovic)
-https://www-sop.inria.fr/members/Francois.Bremond/Postscript/AnhTuanAVSS14.pdf  2014
-https://arxiv.org/pdf/1803.07985.pdf # visual animal tracking (2018)
-https://arxiv.org/pdf/1507.06821.pdf # Multimodal Deep Learning for Robust RGB-D Object Recognition (2015)
-https://towardsdatascience.com/background-removal-with-deep-learning-c4f2104b3157?gi=2ef3a5272e5d (2017 Background removal with deep learning)
-
-https://opencv.org/courses/ # expensive ai course
-https://www.fast.ai/ #free ai course
-
-Build with Python or C++?
-
-
-##################
-# computervision #
-##################
-background subtraction (bgs)
-segmentation
-	Semantic Segmentation (ai)
-detection (feature, object)
-classification (category recognition)
-
-Challenges: Occlusion, (Sensor-)Noise, changing external conditions( lighting, Shadows, fog, reflection )
-
-> pre-training if lack of data
-> corrupt data to guarantee robust learning
 
 
 
diff --git a/dox/notes_A b/dox/notes_A
index b9614b05af213d22b84d021afcfccdc60e8d011a..6f86d371edadcd720fe74ba2fea4c16b8faa0779 100644
--- a/dox/notes_A
+++ b/dox/notes_A
@@ -106,3 +106,8 @@ Prosilica GX:
 			> give name connect inet
 			--> as terminal to connect to server
 			--> later: use for DMX lighting
+
+
+# WS / VM
+		> 8 cores, 16GB RAM, 2 TB for 1cam
+		> 48 cores, 128GB RAM, 6 TB for 6cam
diff --git a/dox/notes_B b/dox/notes_B
index b52b7e2e6b8dda0b47877289b9ef2f9f2bc18d70..f1074a5e4cd182c5e2b1f00626ee51d97c2d605c 100644
--- a/dox/notes_B
+++ b/dox/notes_B
@@ -16,6 +16,17 @@ map cam -> MAC -> IP -> name (contains ID)
 	camtron4 000A471D2A66 172.18.227.213 allied-alviumg1-240c-04ytm.idiv.de
 	camtron5 000A471D208D 172.18.225.129 allied-alviumg1-240c-04ytt.idiv.de
 	camtron6 000A47139EA6 172.18.227.215 allied-alviumg1-240c-04ytv.idiv.de
+		direct: 169.254.75.147
+
+
+# camtron1
+	link-local: 169.254.158.10
+	fixed IP: 172.18.205.201 + 255.255.255.0
+	gateway should be: 172.18.205.254
+	gateway is: 0.0.0.0
+
+wired settings
+169.254.100.3 255.255.0.0
 
 ## connect VM + setup ssh
 ssh kr69sugu@idivtibcam01.usr.idiv.de
@@ -34,7 +45,7 @@ ssh kr69sugu@idivtibcam01.usr.idiv.de
 		IMWRITE_JPEG_QUALITY 100; IMWRITE_JPEG_OPTIMIZE 1; IMWRITE_JPEG_RST_INTERVAL 4;
 
 
-# build/setup VimbaX
+## setup VimbaX
 	> download from https://www.alliedvision.com/en/products/software/vimba-x-sdk/
 	> see Vimba_installation_under_Linux.pdf
 	> unpack Vimba SDK
@@ -120,6 +131,23 @@ frames = img data + ancillaryData
 GenICam - camera standard
 TL - Transport Layer - transports data from cam to sw
 
+# DeviceTemperatureSelector - get the temp of the cam!
+TimestampLatch
+TimestampReset
+TimestampLatchValue
+
+Statistics (sub cat)
+	StatFrameRate
+	StatFramesDelivered
+	StatFramesDropped
+	...
+
+[UserSetSelector]
+UserSetLoad
+UserSetSave
+
+CurrentIPAddress
+
 
 Buffer management
 ###############################
@@ -296,3 +324,52 @@ EventCameraDiscovery -> listen to find plugged cams
 		17x1936x1456=47mio -> 17fps @ full res
 		suggested CPU i7 3840
 		reduce ROI
+
+
+# What bandwidth do I have? Do I need multiple ports?
+###############################
+bandwidth = fps * pixel format (bpp) * resolution (* ncams)
+StreamBytesPerSecond = 17 * 3 * 1936 * 1216 * 1 = 120 062 976 ~= 120,1 MBps < 125 MBps
+
+	1. Determine max_speed with the highest fps!
+	2. Take max_speed and reduce fps so it still fits 2*max_speed
+		-> subsample the same video?
+	3. calc mean_err by comparing 1. and 2. -> add it to 2., as 2. will be lower.
+
+	!! exposure and fps: at 10fps exposure can't be more than 100ms!
+	!! exposure and fps: at 17fps exposure can't be more than 59ms!
+	Best practice: set gain to the lowest possible value and increase exposure as needed
+	!! if you use more than one cam on one interface, the available bandwidth has to be shared between the cameras.
+
+###############################
+# bandwidth calculations for Alvium G1 240C
+MAX BANDWIDTH for Gigabit ethernet port ~= 125 MBps
+max res: 1936x1216
+max fps: 49
+pixelformat: RGB8  > different options. eg 10bit/12bit RGB/MONO.
+	1936 * 1216 * 49 * 1 ~= 115,3 MBps >> MONO8 @ maxFPS
+	1936 * 1216 * 17 * 1 ~= 40,0 MBps >> MONO8 @ 17FPS
+		@ 1 Min -> 2.4GB
+		@ 30 Mins -> 72GB
+		with 60 recordings: 4.3TB
+	1936 * 1216 * 17 * 3 ~= 120,1 MBps >> RGB8 @ 17FPS
+		@ 1 Min -> 7.2GB
+		@ 30 Mins -> 216.1GB
+		with 60 recordings: 13 TB
+
+	1936 * 1216 * 10 * 3 ~= 70,6 MBps
+		@ 1 Min -> 4.2 GB
+		@ 30 Mins -> 127.1 GB
+		with 60 recordings: 7.6TB
+
+Storage old ws:
+	2TB: 2*1TB (7200 RPM, SATA 6 Gb/s)
+	8GB DDR3 RAM
+	Intel Xeon 3.6 GHz (4 cores)
+
+	default bandwidth: 115 MBps
+-------------------------------
+Save "reasonable" settings in XML.
+If dark, set ExposureTimeAbs
+Q: how to set fps to 3?
+###############################
diff --git a/dox/notes_C b/dox/notes_C
index 773a16a37c9e1bbae0da9bb78f090f37fd4e3c06..1ebb5f34cbee9990ac31cb82cec0404d6bfd89da 100644
--- a/dox/notes_C
+++ b/dox/notes_C
@@ -4,7 +4,7 @@
 git@gitlab.idiv.de:sugu/camtron.git
 
 Does
-	> background subtraction
+	> background subtraction // bgs
 	> calculates centroid points of all frames in a record
 ###############################
 
@@ -39,8 +39,8 @@ PCA?
       	frameProcessor = std::make_shared<FrameProcessor>();
 
 
+## libarchive stuff
 ###########
-libarchive stuff
 	archive_read_xxx()
 	archive_write_xxx()
 	struct archive_entry
diff --git a/dox/timeplan-milestones.ods b/dox/timeplan-milestones.ods
index f6177144252286938371e5018a87c4f4c0619189..af243d44fd0f261616d34b706e7bf70aa9c71c3c 100644
Binary files a/dox/timeplan-milestones.ods and b/dox/timeplan-milestones.ods differ
diff --git a/dox/todo_ct b/dox/todo_ct
index bff9bf63c54e4203c8f6a39c60bfabc771a1b982..2c62887235dce59e97a23e1a1d69a4bc2b8a2893 100644
--- a/dox/todo_ct
+++ b/dox/todo_ct
@@ -8,9 +8,9 @@ CAMTRON PROCESSING PIPELINE
 ###########################
 "tackle long-term complex projects from beginning to end"
 
-- keep up new work habit - consistently work 20-30 hw⁻¹
-- use copilot more efficiently
-- ‼️cam work with VM? (MTU)
+- ⏲️
+- 🗣️  sebastian hostname-fix IP
+- ‼️ cam works with VM?
 	read docu, write alliedvision
 	-- if yes.
 		test (VV)
@@ -18,10 +18,10 @@ CAMTRON PROCESSING PIPELINE
 	-- if no
 		- communicate w sebastian 1rst.
 		- communicate w uli
-- ‼️HPC. store some data. delete rest.
-	- write mail XX
-	- write joerdis
-- ‼️ mv big ct-data to hdd
+- ‼️  HPC. store some data 2018/2019 + vids. delete rest. write HPC guy
+- ‼️  mv big ct-data to hdd
+- use copilot more efficiently
+	- read Tuts, configure
 
 
 
@@ -34,42 +34,31 @@ B) recorder - camera produces frames and timestamps
 	- wait and detect cams
 	- ‼️ produce frames & timestamps
 ============================
-	- new architecture: console app (later: gui app) + Core.
-	CONSOLE APP:
-		in extra thread to not block main thread!
-		listen to user keys
-
-	CORE APP
-			functions / args:
-			- out_dir:
-				 parse dir arg and create it if it doesnt exist >- finish it / with proper user in/	output >- or cancel if it doesnt exist
-				 	>> EASIER: + no userinput which could complicate GUI app later...
-			- list cameras:
-			- store/load cam settings
-			- calibrate
-			- record
-			- stop recording
-		- consider using app settings in a json-file / QSettings
-			- cam mappings (IP/MAC/ID)
-			- out_dir
-
-	GUI APP:
-		- build GUI
-		- ... parse output for -LISTCAMERAS- and put into tablewidget
+	CORE
+		- list cameras
+			- test: core::listCams works with multiple cams?
+
+		- store/load cam settings
+		- calibrate
+			- see ada - HPC - CALI SW
+		- record
+		- stop recording
+	- consider using app settings in a json-file / QSettings
+		- cam mappings (IP/MAC/ID)
+		- out_dir
+
+	CONSOLE
+		- print: convert '\n' to linebreaks
+
 
 ===============================
-	- get pix, using
+	- get pix, using VV and recorder
 		- laptop - direct connection
 		- laptop - local network
 		- virtual machine
 	- get video via vimbax-SDK
-	- cam calibration (Use Viewer? Or own SW?)
-	- configure cam settings
-	* get video for 3 cams simultaneously
-		- what SW?
-		- calculate supported framerate+resolution+codec for hardware (CPU,RAM,HDD...)
-			- or do tests and see if frames are dropped
-
+	- get video for 3 cams simultaneously
+	- do tests and see if frames are dropped
 	- central config
 		- threads for started recordings [PID]
 		- storage folder
@@ -80,13 +69,13 @@ B) recorder - camera produces frames and timestamps
 
 A)
 ###########################
-	- VM
-	- light
+	- test VM, else buy WS!
+	- !! light
 		- 2*LEDs: 1*day + 1*night
 		- find ecolux hw (Boxes, DMX per unit, DMX splitter/controller)
-	- arenas
+	- !! arenas
 		- acrylic cylinder sawn into 2 parts.
-	- floor
+	- floor ?
 		- plaster+color+lacquer..
 
 
@@ -132,6 +121,12 @@ X) HPC - High Performance Cluster
 
 BACK-BURNER 2D
 ###############################
+ctb
+- GUI APP:
+	- build GUI
+	- ... parse output for -LISTCAMERAS- and put into tablewidget
+- calculate supported framerate+resolution+codec for hardware (CPU,RAM,HDD...)
+
 - Documentation
 	- sketch
 		- chamber: cam height+resolution+arena diameter for pixel to cm/mm ratio
@@ -147,49 +142,19 @@ BACK-BURNER 2D
 
 Done
 #######################
-- started working 20-26 hw⁻¹ for 3 weeks now :)
-	- habit. consistency!
-- started writing C++ again (+bash,+python)
-	- setup IDE (LSP plugins), QT, codecompletion
-	- get copilot
+- print and printlist()
+- integrate versionstring in new arch
+- mark current cam [*] <-- selected. others []
+- write allied: sub-max MTU shouldn't be a huge performance hit!
+- write mail HPC guy
+- write joerdis
+- list 1 cam and send to console via signal-slots
+- version
+- setup IDE (LSP plugins), QT, codecompletion
+- get copilot
 - automate + fix backup script
 - get VM
-- got pictures, tested all 6 cams :)
+- direct VV: got pictures, tested all 6 cams :)
 - bup old camtrack
 - start implementing new arch
 	- console app. thread. signal-slot connections.
-- backup old tibcamtrack VM
-- get video in VV
-- connect VM. ssh and rdp
-- sort ct a bit:
-	- folders, files, ... naming convention
-	- README + Description
-	- merge doc?
-	- b) exclude SDK / examples from repo
-- attach cam arm to wood (DONE)
-- test run 1 cam to estimate hw requirements
-	- test bottleneck (CPU,HDD,ethernet,)
-	- storage: 10s hochrechnen
-		- mit 25/17/10 FPS.
-		- different compression types
-
-- research suitable "fridges"
-	-- price!, temp-range, electricity inside, size space in lab?
-	-- more than 2
-		Reach In Plant Growth Chambers E-36L1
-		---------------------
-	- setup stativ, use our lab
-	- how-to connect ETHernet cable? it will have to go outside of fridge??
-	- how-to control temperature and humidity? (see picture from thomas dox)
-- research WS (see hardware_notes.ods for details like specs. CPU, RAM, SSDs, prices)
-- cam aufhaengung
-	research stativ/tripod: manfrotto arm + clamp + bodenplatte for cams
-	other stuff to fix cameras - stange
-- sensors for temp+humidity (HOBO)
-- backup data from towers
-- setup tower (displays,cards,cables)
--------------------------------
-- bal: func: list_suffices
-- bal: function to replace all names to snake_case (upper to lower, space/dash to underscore)
-- opd: works with "name containing spaces.pdf" -- use array for files instead of string concatenation in o_helper()
-- bug: changed IFS in o_helper broke nav function