<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Tommie Jones</title>
<link href="http://www.atlantageek.com/atom.xml" rel="self"/>
<link href="http://www.atlantageek.com/"/>
<updated>2014-11-07T10:55:06-05:00</updated>
<id>http://www.atlantageek.com</id>
<author>
<name>Tommie Jones</name>
</author>
<entry>
<title>Detecting Seasonality</title>
<link href="http://www.atlantageek.com/2014/11/01/detecting-seasonality/"/>
<updated>2014-11-01T00:00:00-04:00</updated>
<id>http://www.atlantageek.com/2014/11/01/detecting-seasonality</id>
<content type="html"><h1>Automated way to detect Seasonality</h1>
<p>When working with real-world data, it's obvious that data does not arrive with the smooth consistency your linear regression textbook suggested. Data comes in fits and starts. It comes only once a week, or three days a week. It's sent at 11:55 PM and in the process crosses the day boundary. It reacts to holidays, either taking them off or doubling in quantity. This makes forecasting and anomaly detection at large scale difficult, but not impossible.</p>
<p>In the previous <a href="http://atlantageek.com/2014/10/09/detecting-cadence/">article</a> we discussed how to identify when data is not sent by detecting its cadence. My current project has thousands of data streams delivered every month, and we need to identify when a stream is not sent. But a stream failing to arrive is not the only problem; sometimes only part of the data is sent. To identify anomalies in the data streams we need to know how much data is sent, and how often. We need to be able to say that 'data stream 1 sends 7,000 records every week' or 'data stream 2 sends 1,000 records every 2 weeks', even if the majority of those records are sent on the last Friday.</p>
<p>To make these types of statements we need to:
1. Identify the data cycle (week, month, bi-weekly).
2. Aggregate the data up to that cycle and base the forecast on the cycle-length period.</p>
<p>The rest of this post will focus on item 1. Below is 200 days of data for one of the streams.</p>
<p><img src="/img/salesdata.png" alt="sales data" /></p>
<p>If you look closely you can see that this data comes in weekly cycles, but almost every day some data comes in. How can we identify the cycle of the data programmatically? The approach taken here is to compute the correlation between the current data and lagged data, where the lagged data is the same series shifted by some number of days. To calculate this we use the following R script.</p>
<pre><code> data &lt;- read.csv("a.csv")
lagpad &lt;- function(x, k=1) {
i&lt;-is.vector(x)
if(is.vector(x)) x&lt;-matrix(x) else x&lt;-matrix(x,nrow(x))
if(k&gt;0) {
x &lt;- rbind(matrix(rep(NA, k*ncol(x)),ncol=ncol(x)),
matrix(x[1:(nrow(x)-k),], ncol=ncol(x)))
}
else {
x &lt;- rbind(matrix(x[(-k+1):(nrow(x)),],
ncol=ncol(x)),matrix(rep(NA, -k*ncol(x)),ncol=ncol(x)))
}
if(i) x[1:length(x)] else x
}
c &lt;- c() #Initialize Correlation vector.
#Try lagging from 1 to 32 days and see which has the strongest correlation.
for (i in 1:32)
{
c[i] &lt;- cor(data$val, lagpad(data$val,i), use="complete.obs")
}
barplot(c)
</code></pre>
<p>Here we are looping through lags 1-32 and correlating the data with each lag. A single correlation value is generated for each lag; we can plot them below.</p>
<p><img src="/img/correlation.png" alt="correlation" /></p>
<p>The image shows that the 7-, 14-, 21-, and 28-day lags have the strongest correlations. If data is strongly correlated every 7 days, then multiples of 7 will be strongly correlated too. This presents another problem: looking at this graph you would see the 7-day cycle, and the slightly stronger correlation at the 14-day mark could be an accident. Because of this we can't just take the lag with the largest value as the cycle. The smaller factors should get precedence over their larger multiples, even if the multiples have higher correlation.</p>
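<p>The same lag-correlation scan can be sketched outside of R. Here is a minimal JavaScript version; the helper names are our own invention, not from any library:</p>

```javascript
// Pearson correlation between two equal-length arrays.
function mean(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}

function pearson(xs, ys) {
  var mx = mean(xs), my = mean(ys);
  var num = 0, dx = 0, dy = 0;
  for (var i = 0; i !== xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) * (xs[i] - mx);
    dy += (ys[i] - my) * (ys[i] - my);
  }
  return num / Math.sqrt(dx * dy);
}

// Correlate the series with itself shifted by `lag` days,
// dropping the unmatched head/tail (like use="complete.obs" in R).
function lagCorrelation(series, lag) {
  return pearson(series.slice(lag), series.slice(0, series.length - lag));
}
```

<p>For a series with a true 7-day cycle, lagCorrelation at lag 7 will be near 1, while off-cycle lags come out much lower.</p>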
<p>The correlation can be adjusted by subtracting out each factor's correlation: for instance, how much is left if the 7-day correlation is subtracted from the 14-day correlation? We also clean up the graph by zeroing out the negative correlations. We append the following to the previous script to get an adjusted dataset.</p>
<pre><code> adj_corr &lt;- c
for (i in 1:length(c))
{
if (c[i] &gt; 0) {
factors &lt;- unique(factorize(i)) # factorize() is not in base R; e.g. the conf.design package provides one
print(factors)
for (j in 1:length(factors))
{
k = as.numeric(factors[j])
if (i != k)
{
if ((adj_corr[i] &lt; adj_corr[k]) &amp;&amp; (adj_corr[k] &gt; 0 ) )
{
adj_corr[i] &lt;- 0
}
else
{
adj_corr[i] &lt;- adj_corr[i] - adj_corr[k]
}
}
}
}
else
{
adj_corr[i] = 0
}
}
</code></pre>
<p>This results in the following adjusted correlation</p>
<p><img src="/img/adj_correlation.png" alt="correlation" /></p>
<p>The correlations at 14, 21, and 28 days are much weaker once we take the 7-day correlation out.</p>
<p>Based on this graph we realized that we need to aggregate seven days of data to generate forecasts. The same technique can identify bi-weekly cycles, twice-a-week cycles, or even quarterly cycles if you have enough data.</p>
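<p>Step 2, aggregating up to the detected cycle, can be sketched as follows, assuming the daily values sit in a plain array (the function name is our own):</p>

```javascript
// Roll a daily series up to its detected cycle length (e.g. 7 days),
// summing each complete period and dropping any partial trailing period.
function aggregateByCycle(daily, cycleLength) {
  var totals = [];
  var nPeriods = Math.floor(daily.length / cycleLength);
  for (var p = 0; p !== nPeriods; p++) {
    var sum = 0;
    for (var d = 0; d !== cycleLength; d++) {
      sum += daily[p * cycleLength + d];
    }
    totals.push(sum);
  }
  return totals;
}
```

<p>A forecast built on these period totals is insulated from the day-of-week noise in the raw series.</p>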
<p>This was a simple case, but what does a monthly cycle look like?</p>
<p><img src="/img/monthly_correlation.png" alt="correlation" /></p>
<p>In this case the data is sent on the first of every month.</p>
<p>Here we can see that the uneven number of days in a month causes problems. Without a single strong correlation, the aggregates will vary wildly. For example, if we aggregate at a 30-day period, some periods will contain no data at all, and at least one 30-day period each year (the one containing February) will have twice as much data attributed to it. If we increase the period by one day, six periods a year will have twice as much data.</p>
<p>If your decisions are made on a daily basis and you cannot tolerate these jumps in your forecast, your algorithm needs to be calendar-aware.</p>
<p>Monthly data aside, this is a good technique for identifying aggregation periods automatically.</p>
</content>
</entry>
<entry>
<title>Detecting the cadence of your client's data.</title>
<link href="http://www.atlantageek.com/2014/10/09/detecting-cadence/"/>
<updated>2014-10-09T00:00:00-04:00</updated>
<id>http://www.atlantageek.com/2014/10/09/detecting-cadence</id>
<content type="html">Often a SaaS company is dependent on periodic data from its clients. It's challenging enough just to get your clients set up and sending data. But in the interest of client service, and to spot trouble early, you often want to detect when data delivery has been interrupted. The interruption could be something that broke but has not been identified yet, or a client that is planning on firing you, in which case you may want to go on the offensive to win them back. Detecting interruptions is not that difficult, except that in the SaaS world you're often playing a numbers game: you have hundreds or thousands of clients, and each client can have its own cadence of data delivery (daily, weekly, monthly, every other Tuesday). So it's difficult to identify when a client is late.
<p>
I've seen a similar problem at work and we tried multiple solutions, but only one freed us from scanning hundreds of table rows and graphs to identify interruptions in data delivery. The most recent solution has had some success by calculating a score based on the predicted cadence and the amount of time it's been since the last data delivery.
<p>
Previous solutions attempted to show a graph so the user could see at a glance whether a data delivery was late, but this was error prone. Other attempts classified clients as daily/weekly/monthly, because these were the terms we used, on the theory that tracking daily uploads would look something like this.
<p>
Daily: <span class="bar"> 1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1,1,1 </span>
<p>
Weekly: <span class="bar"> 1,0,0,0,0,0,0, 1,0,0,0,0,0,0, 1,0,0,0,0,0,0, 1,0,0,0,0,0,0, 1,0,0,0,0,0,0, 1,0,0,0,0,0,0, 1,0,0,0,0,0,0, 1,0,0,0,0,0,0</span>
<p>
Monthly:<span class="bar"> 1,0,0,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0, 1,0,0,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0</span>
<p>
But the data was never this clean. Days would be skipped
<span class="bar"> 1,1,1,0,1,0,0,1,1,0,1,1,0,0,1,1,1,0,1,0,0,1,1,1,0,1,0,0,1,1,1,0,1,0,0,1,1,1,1,1,0,0,0,0,1,1,1,0,0,1,1,0,0,1,0,0,1,0,1,1,1 </span>
or delayed <span class="bar"> 1,1,1,1,0,1,0,1,1,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,0,1,0,1,1,1,1,1,0,0,1,1,1,1,1,0,0,1,1,1,1,0,1,0,1,1,1,1,0,0,1,1,1,1,0,0 </span>
or both <span class="bar"> 1,1,0,1,0,1,0,1,0,1,0,0,1,1,1,0,1,1,1,0,0,1,0,1,1,0,1,0,1,1,0,0,1,0,0,1,1,0,0,1,0,0,1,0,1,1,0,1,0,1,0,0,0,0,0,1,1,1,1,0,0 </span>
<p>
And few of our clients were sending data this often. Most sent data weekly, but not necessarily on the same day of the week.
<span class="bar"> 1,0,0,0,0,0,0, 0,0,1,0,0,0,0, 1,0,0,0,0,0,0, 0,0,0,0,0,0,1, 1,0,0,1,0,0,0, 0,0,1,0,0,0,0, 0,0,0,1,0,0,0, 0,1,0,0,0,0,0</span>
<p>
Eventually we realized that each client had its own cadence and its own level of consistency. We needed a way to identify a client's cadence and consistency, then use those numbers to determine whether the current gap is normal or an anomaly. <p>
We had the data: we knew on which days each client sent us data, so we could compute the mean and standard deviation of how long a client would go without sending us data. <br>
With this method we can calculate, for any given day, how many standard deviations the current gap is from the mean. The chart below looks back over time, with the most recent data points on the right.
<span class="composite"> 1,0,0,0,0,0,0, 0,0,1,0,0,0,0, 1,0,0,0,0,0,0, 0,0,0,0,0,0,1, 1,0,0,1,0,0,0, 0,0,1,0,0,0,0, 0,0,0,1,0,0,0, 0,1,0,0,0,0,0</span> Whenever the blue line crossed above 1.0, the current gap was larger than roughly 85% of the previous gaps. In our usage we contact the client when the score hits 2 standard deviations, meaning the gap is larger than roughly 97% of previous gaps. Sorting the list of clients by this score tells us which clients need to be contacted.<p>
After 3 months the report has been reasonably successful. Our customer service representatives are rarely blindsided by calls from clients asking why new data isn't appearing in their accounts. In fact, since we often call the client first, we're finding that roughly 50% of the time the client knows their data transfer is down, and the other 50% of the time they did not realize they had stopped sending us data. <p>
Of course this only adds one dimension. There are requests to add data volume to the score, customers whose cadence stretches across a year are difficult to predict, and we have to be careful with new clients for whom we have no historical data. Still, this report gives us confidence that our existing client base is covered, so we can spend more time bringing new clients into our system.
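<p>The scoring described above can be sketched in a few lines of JavaScript. The function and variable names are our own, not from the production system:</p>

```javascript
// Mean and standard deviation of the historical gaps between deliveries.
// deliveryDays is a sorted array of day offsets, e.g. [0, 7, 16, 21].
function gapStats(deliveryDays) {
  var gaps = [];
  for (var i = 1; i !== deliveryDays.length; i++) {
    gaps.push(deliveryDays[i] - deliveryDays[i - 1]);
  }
  var mean = gaps.reduce(function (a, b) { return a + b; }, 0) / gaps.length;
  var variance = gaps.reduce(function (a, g) {
    return a + (g - mean) * (g - mean);
  }, 0) / gaps.length;
  return { mean: mean, sd: Math.sqrt(variance) };
}

// Number of standard deviations the current gap sits above the mean gap.
// A score of 2 is the "contact the client" threshold described above.
function overdueScore(deliveryDays, today) {
  var stats = gapStats(deliveryDays);
  var currentGap = today - deliveryDays[deliveryDays.length - 1];
  return (currentGap - stats.mean) / stats.sd;
}
```

<p>Sorting all clients by overdueScore in descending order produces the contact list in one pass, with no per-client thresholds to maintain. (Note that a perfectly regular client has zero standard deviation, so a real system needs a floor on sd.)</p>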
<script type="text/javascript">
$(function() {
/* Use 'html' instead of an array of values to pass options
to a sparkline with data in the tag */
$('.bar').sparkline('html', {type: 'bar'} );
$('.composite').sparkline('html', {type: 'tristate'} );
$('.composite').sparkline([-1,-0.85,-0.7,-0.55,-0.4,-0.25,-0.1, 0,0.15,0.3,-1.0,-0.85,-0.7,-0.55, -1.0,-0.85,-0.7,-0.55,-0.4,-0.25,-0.1, 0,0.15,0.3,0.45,0.6,0.75,-1.0, -1.0,-0.85,-0.7,-1,-0.85,-0.7,-0.55, -0.4,-0.25,-1.0,-0.85,-0.7,-0.55,-0.40, -0.25,-0.1,0,-1.0,-0.85,-0.7,-0.55, -0.4,-1.0,-0.85,-0.70,-0.55,-0.4,-0.25], {type: 'line', composite: true, fillColor: null } );
});
</script>
</content>
</entry>
<entry>
<title>snmp-trap-listener-in-node-part3</title>
<link href="http://www.atlantageek.com/2014/08/23/snmp-trap-listener-in-node3/"/>
<updated>2014-08-23T00:00:00-04:00</updated>
<id>http://www.atlantageek.com/2014/08/23/snmp-trap-listener-in-node3</id>
<content type="html"><p>This article assumes you've read</p>
<ul>
<li><a href="/2014/08/23/snmp-trap-listener-in-node">Intro to node.js for snmp traps</a></li>
<li><a href="/2014/08/23/snmp-trap-listener-in-node2">Setting up Linux and building your own traps</a></li>
</ul>
<p>Set up the directory where you plan to do development. Assuming you have already installed node and npm on your system, install the dependencies as follows. Note that http and util are built into node and need no install, and the trap listener module is snmpjs, not snmp:</p>
<pre><code> npm install snmpjs
 npm install express
 npm install bunyan
</code></pre>
<p>The minimum code you need to write to see the snmp traps is the following.</p>
<pre><code>var os = require('os');
var snmp = require('snmpjs');
var http = require('http');
var util = require('util');
var result = []; // collect received traps
var trapd = snmp.createTrapListener();
trapd.on('trap', function(msg){
result.push(msg);
var now = new Date();
console.log("Trap Received " + now);
console.log(util.inspect(snmp.message.serializer(msg)['pdu'], false, null));
console.log(result.length);
});
trapd.bind({family: 'udp4', port: 162});
</code></pre>
<p>You'll see that trapd.on('trap', ...) registers a callback that runs for every trap received on the bound port.</p>
<p>Using the snmptrap command from the previous article</p>
<pre><code> snmptrap -v 1 -m +TRAP-TEST-MIB -c public localhost TRAP-TEST-MIB::demotraps localhost 6 17 '' SNMPv2-MIB::sysLocation.0 s "You were here"
</code></pre>
<p>The output from the node process will be</p>
<pre><code>Trap Received Sat Aug 30 2014 23:42:42 GMT-0400 (EDT)
{ op: 'Trap(4)',
enterprise: '1.3.6.1.4.1.2021.13.990',
agent_addr: '127.0.0.1',
generic_trap: 6,
specific_trap: 17,
time_stamp: 18277502,
varbinds:
[ { oid: '1.3.6.1.2.1.1.6.0',
typename: 'OctetString',
value: &lt;Buffer 59 6f 75 20 77 65 72 65 20 68 65 72 65&gt;,
string_value: 'You were here' } ] }
</code></pre>
<p>The strength of node.js shows when you want to integrate a web server with this functionality. In other languages, combining a web server with an SNMP trap server means creating multiple threads or using the Unix select() call to listen on multiple file handles; both methods are kludgy. With node.js you just configure both a trap listener and a web server with callback functions, and it just works. Here is a simple web server using the express framework.</p>
<pre><code>var os = require('os');
var snmp = require('snmpjs');
var http = require('http');
var express = require('express');
var util = require('util');
var app = express();
var result=[];
app.use(express.static('public'));
app.get('/get_today_count', function(req, res) {
console.log(result.length);
res.send(result.length.toString());
});
var server = app.listen(3001, function() {
console.log('Listening on port %d', server.address().port);
});
var trapd = snmp.createTrapListener();
trapd.on('trap', function(msg){
result.push(msg);
var now = new Date();
console.log("Trap Received " + now);
console.log(util.inspect(snmp.message.serializer(msg)['pdu'], false, null));
console.log(result.length);
});
trapd.bind({family: 'udp4', port: 162});
</code></pre>
<p>This code uses the express module, so any requests for static files are served from the public directory. The initial index.html file runs JavaScript that makes an ajax call to /get_today_count every few seconds. To get this server along with public/index.html, pull the code from <a href="https://github.com/atlantageek/node-and-snmp/tree/master/code">github</a>.</p>
<p>The screen will look like this:
<img src="https://raw.githubusercontent.com/atlantageek/node-and-snmp/master/images/snmp_count.png" alt="screenshot" /></p>
<p>Part 4 is coming</p>
</content>
</entry>
<entry>
<title>snmp-trap-listener-in-node-part2</title>
<link href="http://www.atlantageek.com/2014/08/23/snmp-trap-listener-in-node2/"/>
<updated>2014-08-23T00:00:00-04:00</updated>
<id>http://www.atlantageek.com/2014/08/23/snmp-trap-listener-in-node2</id>
<content type="html"><h1>Get your system to build traps</h1>
<p>This article assumes you've read <a href="/2014/08/23/snmp-trap-listener-in-node">the first article</a></p>
<p>I'm going to assume a Linux environment here.</p>
<p>The first step is to start generating traps in your test environment. I'm borrowing a lot of this from <a href="http://technotes.twosmallcoins.com/?p=369">How to create a test trap</a></p>
<p>First we need to install net-snmp-utils. This will give us a few command line tools and make sure the corresponding libraries are available. Once installed, you should be able to type the snmptrap command and get a usage listing.</p>
<h2>STEP 1</h2>
<p>To install you do:</p>
<pre><code>$ sudo yum install net-snmp-utils net-snmp-devel
</code></pre>
<h2>STEP 2</h2>
<p>I'm not sure what test traps you have installed on the server, so the next step is to create one. To create a test trap we need to write a MIB file. On Fedora systems you will find these in the /usr/share/snmp/mibs directory.</p>
<p>If you want to confirm the directory, run net-snmp-config --snmpconfpath to see the paths searched for MIB files.</p>
<p>Use your favorite editor and create a test trap as follows:</p>
<p>TRAP-TEST-MIB.txt</p>
<pre><code>TRAP-TEST-MIB DEFINITIONS ::= BEGIN
IMPORTS ucdExperimental FROM UCD-SNMP-MIB;
demotraps OBJECT IDENTIFIER ::= { ucdExperimental 990 }
demo-trap TRAP-TYPE
STATUS current
ENTERPRISE demotraps
VARIABLES { sysLocation }
DESCRIPTION "This is just a demo"
::= 17
END
</code></pre>
<h2>STEP 3</h2>
<p>To send this trap we do the following</p>
<p>snmptrap -v 1 -m +TRAP-TEST-MIB -c public localhost TRAP-TEST-MIB::demotraps localhost 6 17 '' SNMPv2-MIB::sysLocation.0 s "You were here"</p>
<p>Breaking down the line is as follows.</p>
<ul>
<li>'snmptrap' - command to send a trap</li>
<li>'-v 1' - We are sending a SNMP version 1 trap</li>
<li>'-m +TRAP-TEST-MIB' Look in the config path and load the mib file 'TRAP-TEST-MIB'</li>
<li>'-c public' specifies that the community string is 'public'. A community string is like a password.</li>
<li>'localhost' The server we are sending the trap to</li>
<li>'TRAP-TEST-MIB::demotraps' - Which trap in TRAP-TEST-MIB that is being sent.</li>
<li>'localhost' Where the trap is coming from</li>
<li>'6' Trap type. Can be a number from 0-6 (0-coldstart, 1-warmstart, 2-linkdown, 3-linkup, 4-authentication failure, 5-egp neighbor loss, 6-enterprise specific)</li>
<li>'17' - the specific trap ID, as defined in the MIB file above (::= 17)</li>
</ul>
<p>Complete.
<a href="/2014/08/23/snmp-trap-listener-in-node3">Next step is a server to see the trap. </a></p>
</content>
</entry>
<entry>
<title>snmp-trap-listener-in-node</title>
<link href="http://www.atlantageek.com/2014/08/23/snmp-trap-listener-in-node/"/>
<updated>2014-08-23T00:00:00-04:00</updated>
<id>http://www.atlantageek.com/2014/08/23/snmp-trap-listener-in-node</id>
<content type="html"><h1>Developing a Trap Listener in Node.js</h1>
<p>A challenge in developing network monitoring applications is the need to listen to multiple inputs. A modern trap listener with a web interface must service both traps coming from the network and web requests coming from users or other services. There were a couple of ways to handle this, but they always came with tradeoffs.</p>
<h2>Threads</h2>
<p>Threads are one solution: one thread could listen on the SNMP trap port and another could monitor the HTTP port. However, since these listeners were often implemented as separate processes, it was difficult to share data between them.</p>
<h2>IO.Select</h2>
<p>Another option was using IO.Select (or just select() in C). The problem with IO.select is that it's clunky. Look at the following segment.</p>
<pre><code>r, w = IO.select(http_stream, snmp_stream)
r.each do |stream|
stream.handle_read
end
</code></pre>
<p>You'll see in the example that we have a list of streams to listen to, and since select is a blocking call, you can't do much of anything else while you wait.</p>
<h2>Event Programming with Node.js</h2>
<p>However, node.js greatly simplifies this. In node.js you configure a listener and attach a callback to handle the event. The developer does not need to loop around a polling method or block on a read; you just tell the program to listen for events and supply a function to call when each event occurs.</p>
<p>An example of this is below.</p>
<pre><code>var os = require('os');
var snmp = require('snmpjs');
var http = require('http');
var express = require('express');
var app = express();
var result=[];
//Setup http server code
app.use(express.static('public'));
app.get('/get_today_count', function(req, res) {
console.log(result.length);
res.send(result.length.toString());
});
var httpd = app.listen(3001, function() {
console.log('Listening on port %d', server.address().port);
});
var trapd = snmp.createTrapListener();
trapd.on('trap', function(msg){
result.push(msg);
var now = new Date();
console.log("Trap Received " + now);
console.log(result.length);
});
trapd.bind({family: 'udp4', port: 162});
</code></pre>
<p>In this code you can see that two listeners have been configured: an SNMP trap listener and an HTTP listener. Each has an inline function that runs whenever new data is available for this process. In the following series of posts I will cover: generating traps, building a trap listener, and using Dashboard-js to build an always-on dashboard to monitor traps.</p>
<ul>
<li><a href="/2014/08/23/snmp-trap-listener-in-node2">Setting up Linux and building your own traps</a></li>
<li><a href="/2014/08/23/snmp-trap-listener-in-node3">Building a Node.js Listener</a></li>
<li>Building a Dashboard for your traps - coming soon.</li>
</ul>
</content>
</entry>
<entry>
<title>nodejs-as-a-windows-service</title>
<link href="http://www.atlantageek.com/2014/03/28/nodejs-as-a-windows-service/"/>
<updated>2014-03-28T00:00:00-04:00</updated>
<id>http://www.atlantageek.com/2014/03/28/nodejs-as-a-windows-service</id>
<content type="html"><p>To install node.js as a Windows service you need three items:
* NSSM http://nssm.cc/download
* node.js for Windows: http://nodejs.org/download
* your node.js application.</p>
<p>Unzip your node.js code into a directory. Let's say it's c:\helloworld\hi.js.
Install both nssm and node.js for Windows. Let's assume that node.js is at c:\Users\atlantageek\node.exe</p>
<p>Run 'nssm install' from the directory where you installed nssm. A window will pop up; on my version it looks similar to the following.
* The path will be the path to node.exe.
* The startup directory will be the path where the js code and modules are located.
* The options will be the actual js application file. For my example it looks like the following.</p>
<p><img src="/img/posts/nssm.png" alt="nssm service installer" /></p>
<p>This is the very basic that you need. You can also define Standard In and Standard Out redirection in the I/O tab.</p>
<p>Once the service is created, you can start and stop it from the Services manager that's built into Windows.</p>
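<p>If you prefer the command line, NSSM can also be driven without the GUI. A hedged sketch of the equivalent steps, using the example paths from this post (the service name 'helloworld' is our own choice; check your NSSM version's help for the exact parameters):</p>

```shell
# Install a service that runs hi.js under node.exe (assumed paths from above)
nssm install helloworld "C:\Users\atlantageek\node.exe" "C:\helloworld\hi.js"
# Set the startup directory so node can find the js code and modules
nssm set helloworld AppDirectory "C:\helloworld"
# Start the service
nssm start helloworld
```

<p>The I/O redirection from the GUI's I/O tab can likewise be configured with further 'nssm set' parameters.</p>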
</content>
</entry>
<entry>
<title>Cheatsheet for Flot</title>
<link href="http://www.atlantageek.com/2014/01/11/cheatsheet-for-flot/"/>
<updated>2014-01-11T00:00:00-05:00</updated>
<id>http://www.atlantageek.com/2014/01/11/cheatsheet-for-flot</id>
<content type="html"><p>$.plot() is the main function</p>
<pre><code>var plot = $.plot(placeholder,data,options)
</code></pre>
<p>Two ways to specify the div tag</p>
<pre><code>var plot = $.plot("#placeholder",data,options)
</code></pre>
<p>or</p>
<pre><code>var plot = $("#placeholder").plot(data,options)
</code></pre>
<p>Data Format
How to format the data:
data = rawdata, multiple rawdata series, an object, or multiple objects.
The most basic format is an array of [x, y] arrays.</p>
<pre><code>var rawdata = [ [x1,y1], [x2,y2], [x3,y3], [x4,y4] ];
var rawdata = [ [0,10], [1,11], [2,12], [3,13] ];
</code></pre>
<p>Multiple Series are used if you want more than one line on your graph</p>
<pre><code>multiple_rawdata = [rawdata,rawdata,rawdata];
</code></pre>
<p>The object looks like this:</p>
<pre><code>var object = {
color: color or number
data: rawdata
label: string
lines: specific lines options
bars: specific bars options
points: specific points options
xaxis: number
yaxis: number
clickable: boolean
hoverable: boolean
shadowSize: number
highlightColor: color or number
}
</code></pre>
<p>And multiple objects look like this:</p>
<pre><code>var objects = [object1,object2,object3]
</code></pre>
<p>99% of the time you are just using the object form to attach a label to each series</p>
<pre><code>[ { label: "Foo", data: [ [10, 1], [17, -14], [30, 5] ] },
{ label: "Bar", data: [ [11, 13], [19, 11], [30, -7] ] }
]
</code></pre>
<p> Options</p>
<p>Example of options</p>
<pre><code>var options = {
series: {
lines, points, bars: {
show: boolean
lineWidth: number
fill: boolean or number
fillColor: null or color/gradient
}
lines, bars: {
zero: boolean
}
points: {
radius: number
symbol: "circle" or function
}
bars: {
barWidth: number
align: "left", "right" or "center"
horizontal: boolean
}
lines: {
steps: boolean
}
shadowSize: number
highlightColor: color or number
}// Series Option
colors: [ color1, color2, ... ]
legend: {
show: boolean
labelFormatter: null or (fn: string, series object -&gt; string)
labelBoxBorderColor: color
noColumns: number
position: "ne" or "nw" or "se" or "sw"
margin: number of pixels or [x margin, y margin]
backgroundColor: null or color
backgroundOpacity: number between 0 and 1
container: null or jQuery object/DOM element/jQuery expression
sorted: null/false, true, "ascending", "descending", "reverse", or a comparator
},
xaxis, yaxis: {
show: null or true/false
position: "bottom" or "top" or "left" or "right"
mode: null or "time" ("time" requires jquery.flot.time.js plugin)
timezone: null, "browser" or timezone (only makes sense for mode: "time")
color: null or color spec
tickColor: null or color spec
font: null or font spec object
min: null or number
max: null or number
autoscaleMargin: null or number
transform: null or fn: number -&gt; number
inverseTransform: null or fn: number -&gt; number
ticks: null or number or ticks array or (fn: axis -&gt; ticks array)
tickSize: number or array
minTickSize: number or array
tickFormatter: (fn: number, object -&gt; string) or string
tickDecimals: null or number
labelWidth: null or number
labelHeight: null or number
reserveSpace: null or true
tickLength: null or number
alignTicksWithAxis: null or number
}
grid: {
show: boolean
aboveData: boolean
color: color
backgroundColor: color/gradient or null
margin: number or margin object
labelMargin: number
axisMargin: number
markings: array of markings or (fn: axes -&gt; array of markings)
borderWidth: number or object with "top", "right", "bottom" and "left" properties with different widths
borderColor: color or null or object with "top", "right", "bottom" and "left" properties with different colors
minBorderMargin: number or null
clickable: boolean
hoverable: boolean
autoHighlight: boolean
mouseActiveRadius: number
}
interaction: {
redrawOverlayInterval: number or -1
}
margin: {
top: top margin in pixels
left: left margin in pixels
bottom: bottom margin in pixels
right: right margin in pixels
}
markings: [ { xaxis: { from: 0, to: 2 }, yaxis: { from: 10, to: 10 }, color: "#bb0000" }, ... ]
hooks: {
processOptions: function(plot, options)
processRawData: function(plot, series, data, datapoints)
processOffset: function(plot, offset)
drawBackground: function(plot, canvasContext)
drawSeries: function(plot, canvascontext, series)
draw: function(plot, canvascontext)
bindEvents: function(plot, eventHolder)
drawOverlay: function(plot, canvascontext)
shutdown: function(plot, eventHolder)
};
};
</code></pre>
<p>Methods</p>
<pre><code>plot.highlight(series, datapoint_index)
plot.unhighlight(series, datapoint_index)
plot.setData(data) - call draw after to update graph
plot.setupGrid()
plot.draw() - redraw the plot canvas
triggerRedrawOverlay() - force redrawing of overlays
width()/height()
offset() - offset is used in calculating mouse position in graphs
pointOffset({x,y})- dataSpace -&gt; div x,y
resize() -&gt; force canvas to fit size of div
shutdown() -&gt; internal function to disable all event handlers
</code></pre>
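<p>As a rough sketch of the idea behind pointOffset, here is the linear data-space to pixel-space mapping it performs. This is an illustration only, not Flot's actual implementation; the axes and size objects are hypothetical stand-ins for the plot's internal state.</p>

```javascript
// Illustrative only: map a data-space point to div-relative pixel
// coordinates, the way plot.pointOffset({x, y}) does conceptually.
function pointOffset(point, axes, size) {
  var fx = (point.x - axes.xmin) / (axes.xmax - axes.xmin);
  var fy = (point.y - axes.ymin) / (axes.ymax - axes.ymin);
  return {
    left: fx * size.width,         // x grows rightwards
    top: (1 - fy) * size.height    // screen y axis is flipped
  };
}

var off = pointOffset({x: 5, y: 2.5},
                      {xmin: 0, xmax: 10, ymin: 0, ymax: 10},
                      {width: 200, height: 100});
console.log(off.left, off.top);  // 100 75
```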
<p>debugging functions</p>
<ul>
<li>getData()</li>
<li>getAxes()</li>
<li>getPlaceholder()</li>
<li>getCanvas()</li>
<li>getPlotOffset()</li>
<li>getOptions()</li>
</ul>
</content>
</entry>
<entry>
<title>Casing Up the Raspberry Pi, Part I</title>
<link href="http://www.atlantageek.com/2014/01/08/building-a-case-for-raspberrypi/"/>
<updated>2014-01-08T00:00:00-05:00</updated>
<id>http://www.atlantageek.com/2014/01/08/building-a-case-for-raspberrypi</id>
<content type="html"><h1>The Problem</h1>
<p>Raspberry Pi is a great little computer but easily the biggest weakness of the board is its power and/or usb ports. Imagine getting your new raspbery pi, unboxing it, plugging it in with the usb charge you stole from your kid's kindle, plugin your hdmi, keyboard and mouse and ..... nothing. Turns out a keyboard and a mouse can overwhelm your PI, and forget about that wifi adapter or the usb stick with all your rap mp3s. Youre just out of luck.</p>
<p>Eventually you will purchase a powered USB hub, so your $35 computer is now $50 plus a stolen power supply. Before long your setup may look something like this.</p>
<p><img src="/img/posts/Raspberry-Pi-Nest.jpg" alt="Raspberry Pi Nest" /></p>
<p>Lovely, isn't it?</p>
<p>This gets annoying quickly. After searching for a case with an embedded USB hub, I was disappointed to come up with nothing. There are 30 or more commercially available cases and not a single one has a built-in hub. There's a <a href="http://www.adafruit.com/blog/2012/10/19/raspberry-pi-case-with-an-integrated-usb-hub-piday-raspberrypi-raspberry_pi/">home built project</a> that looks cool, and a year-old <a href="http://www.raspberrypi.org/phpBB3/viewtopic.php?t=29667">empty promise</a>. So I decided to build my own case, with the plan that it be as simple and easily reproducible as possible.</p>
<p>So I've been trying to think of an easy way to build this case. The issue with the Raspberry Pi is that, unlike PC motherboards, its ports are distributed all around the board.</p>
<p><img src="/img/posts/RaspiModelB.png" alt="Raspberry Pi Nest" /></p>
<p>So the cases are built skin tight around the board.</p>
<p><img src="/img/posts/tightcase.jpg" alt="Raspberry Pi Nest" /></p>
<p>I wanted to do something different. Not only did I want a USB hub, but I was also inspired by RaspyFi, an excellent project, and wanted to embed the Pi in a speaker. At work's white elephant party I was able to snag this speaker.</p>
<p><img src="/img/posts/speaker.jpg" alt="Speaker" /></p>
<p>So I'm not a woodworker. I don't have the patience for the detailed work it requires, I never had the coordination for it, I don't want to fill up my garage with thousands of dollars of tools, and last of all I like having all my fingers. However, I do have a few basic things like a drill and bits.</p>
<p>Looking at this project, what I decided I want to do is pull the ports off the Raspberry Pi and USB hub and put them on the back of the speaker. This requires some cabling, which isn't a problem. The other issue is that I can't cut a square hole to save my life. After thinking about it, the solution was obvious.</p>
<p>CE Tech and Leviton both make these media wall ports for in-wall cabling.</p>
<p><img src="/img/posts/hdmi.jpg" alt="hdmi port" /></p>
<p>They also sell these wall plates.</p>
<p><img src="/img/posts/plate.jpg" alt="wall plate" /></p>
<p>So you can see where I'm going with this.</p>
<p><img src="/img/posts/sofar.jpg" alt="So far so good" /></p>
<p>Home Depot did not have the usb ports. I've ordered a couple and when I get those I'll finish the case and post a cost breakdown.</p>
</content>
</entry>
<entry>
<title>Setup Script Login</title>
<link href="http://www.atlantageek.com/2014/01/01/setup-script-login/"/>
<updated>2014-01-01T00:00:00-05:00</updated>
<id>http://www.atlantageek.com/2014/01/01/setup-script-login</id>
<content type="html"><p>A recent trend is to build small headless computer systems based on linux. This has come from the ability to run linux on very small computers (in size and power) for very utilitarian tasks (routers, NAS, data collection nodes). However these use cases make it inconvient to connect a keyboard, mouse and monitor to the linux machine. Often its because the device may not have a video out. Or the device's use case does not require a monitor/keyboard to be connected in its normal operations. So when maintenance and/or administrative processes are necessary you need to provide some sort of interface/UI. You can invest hours into developing a cool Web UI that a administrator uses once every 6 months but sometimes a menu driven terminal app could do the job.</p>
<p>So what are the reasons a terminal app would be preferred over a Web UI?
* You have administrative duties that need to be kept away from general users.
* You don't want anyone accidentally accessing the screen during the appliance's normal use.
* You have tasks that need to be completed before the device is on the network (using a keyboard/monitor).
* The tasks that are triggered may require a bit of time to complete (doing a backup).
* You want something easily extendable.
* Your support team is getting tired of describing where the menu/dropdown/button is on the webpage.
* Sometimes a simple question/answer text interface is best, so the administrator can be led through the configuration.</p>
<p>This article will describe how to set up a menu-driven, idiot-proof terminal app.</p>
<p>The first thing you need is a setup script. Download this <a href="/references/setup-wrapper.sh">script</a> as an example. When run from the command line, the initial screen looks like this.</p>
<pre><code>Thu Jan 2 00:35:02 EST 2014
Welcome to your Admin console, please select an action.
Please select action:
1) Run Setup App
2) Update Application
3) Restart Server
4) Configure Server Time
5) Backup System
80) Change Password
90) exit
ENTER YOUR SELECTION:
</code></pre>
<p>This is fine, but you want this to be easy. You don't want the user having to go to a command line, find the script location and then run it. A solution is to start the script as soon as the user logs into the system from the command line. So first let's create a setup user and configure this script as his login shell.</p>
<pre><code>cp setup-shell.sh /usr/local/bin #Copy shell script to /usr/local/bin
useradd setup -m -s /usr/local/bin/setup-shell.sh #create the setup user
passwd setup #Configure the password for the setup user.
</code></pre>
<p>Now when you log in as the setup user you will be given a menu, and when you exit the script you will be thrown back to the login screen.</p>
<p>The last thing to do is give the setup user some privileges so that it can 'restart a server' or 'configure the server time'. If you refer back to the script you'll see that each command that needs root access is pre-pended with a 'sudo' command. We can list these commands in the sudoers file so that the setup user can only run the necessary commands.</p>
<p>Here is the sudoers file I've created to go with this setup-shell.sh script.</p>
<pre><code>Cmnd_Alias SETUPCMDS = /usr/bin/setup, /sbin/ifdown, /sbin/ifup, /usr/bin/yum, /sbin/service, /sbin/chkconfig, /bin/date
setup ALL=(ALL) NOPASSWD: SETUPCMDS
</code></pre>
<p>SETUPCMDS is a list of all the commands setup needs access to.
The next line says that the user setup can connect from any host (through telnet/ssh/putty) or the console and run any of the SETUPCMDS commands as root without typing in a password.</p>
</content>
</entry>
<entry>
<title>Useful Queries to admin postgres</title>
<link href="http://www.atlantageek.com/2013/12/31/useful-queries-to-admin-postgres/"/>
<updated>2013-12-31T00:00:00-05:00</updated>
<id>http://www.atlantageek.com/2013/12/31/useful-queries-to-admin-postgres</id>
<content type="html"><p>Here are some of my favorite postgres queries to help administrate postgres.</p>
<h2>Identify slowest running queries.</h2>
<pre><code>SELECT
pid,
current_timestamp - xact_start as xact_runtime,
query
FROM pg_stat_activity
ORDER BY xact_start;
</code></pre>
<h2>Identify existing locks in postgres, good for finding deadlocks.</h2>
<pre><code>SELECT pg_class.relname,pg_locks.*
FROM pg_class,pg_locks
WHERE pg_class.relfilenode=pg_locks.relation;
</code></pre>
<h2>See foreign key constraints.</h2>
<pre><code>SELECT c.constraint_name
, x.table_schema as schema_name
, x.table_name
, x.column_name
, y.table_schema as foreign_schema_name
, y.table_name as foreign_table_name
, y.column_name as foreign_column_name
FROM information_schema.referential_constraints c
join information_schema.key_column_usage x
on x.constraint_name = c.constraint_name
join information_schema.key_column_usage y
on y.ordinal_position = x.position_in_unique_constraint
and y.constraint_name = c.unique_constraint_name
order by c.constraint_name, x.ordinal_position;
</code></pre>
<h2>Show active connections</h2>
<pre><code>SELECT count(*)
FROM pg_stat_activity;
</code></pre>
</content>
</entry>
<entry>
<title>Sharding a MultiTenant SaaS app</title>
<link href="http://www.atlantageek.com/2013/12/30/sharding-a-multitenant-saas/"/>
<updated>2013-12-30T00:00:00-05:00</updated>
<id>http://www.atlantageek.com/2013/12/30/sharding-a-multitenant-saas</id>
<content type="html"><p>You've finally got some traction with your SaaS project. Lots of large customers and things are moving along rather well. A few of your customers complain that the system is not as fast as it use to be but that's normal. You up the hardware, add a few indexes and things seem steady.
All of a sudden your email/support staff is overwhelmed by customers complaining about 404 errors. You look and see that your web sessions are timing out and queries are starting to drag. Postgres Autovacuum is starting at the worse possible times and you are spending a lot of time babysitting your app instead of counting your money. You thought you had more time but now its obvious, its time to shard the database.</p>
<p>There are a few ways to shard a database. Some apps are sharded by time and others by grouping users. A multi-tenant app lends itself to being sharded by tenant, the idea being that each individual tenant does not want to share data with other tenants. (Notice I said tenant and not user; I assume that users who collaborate are part of the same tenant.)</p>
<p>There are also multiple solutions to how data is sharded. There are three approaches.</p>
<ul>
<li>webserver</li>
<li>app</li>
<li>database partitioning</li>
</ul>
<p>Each approach requires several databases/tables, one set per shard.</p>
<p>The webserver approach assigns a tenant to a specific database/web server combination. This can work, but it's somewhat ugly. Basically, when the user logs onto the app their server is identified and all their work is done on that server. The nginx sticky module comes in handy for this solution.</p>
<p>The app-level approach is a bit more messy and requires the most code changes. When the customer logs in, their db shard is determined and then all data requests for this user go through the tenant's shard.</p>
<p>The database partitioning approach is probably the best. The goal here is to split up the data across multiple tables with minimal code changes. This is done by a combination of partitioning and postgres triggers.</p>
<p>The focus will be on the Database partitioning approach.</p>
<p>Unlike the other approaches to sharding, the database partitioning approach does not shard the whole database. Instead you identify the tables that are the culprits in the drop in performance and partition those. Partitioning is only beneficial for large tables; if a table is larger than the memory of your database machine then it's a candidate for partitioning.</p>
<p>To demonstrate the technique, assume we are working on an accounting application. The largest table is the job_transactions table, which looks like this:</p>
<pre><code>CREATE SEQUENCE id_seq;
CREATE TABLE job_transactions (id int NOT NULL DEFAULT nextval('id_seq'), name varchar, description text, tenant_id integer);
ALTER SEQUENCE id_seq OWNED BY job_transactions.id;
</code></pre>
<p>The job_transactions table probably already exists. Rename the old table to something else and create a new empty table with the old name. We will transfer the data later.</p>
<p>The data needs to be split across multiple tables, so it's time to create those.</p>
<pre><code>CREATE TABLE job_Transactions0(CHECK ( (tenant_id % 0) = 0)) INHERITS (job_transactions);
CREATE TABLE job_Transactions1(CHECK ( (tenant_id % 1) = 0)) INHERITS (job_transactions);
CREATE TABLE job_Transactions2(CHECK ( (tenant_id % 2) = 0)) INHERITS (job_transactions);
</code></pre>
<p>So the data is split between these three tables, using the mod function on tenant_id to determine which partition a row belongs to. To truly leverage this you might want to add tenant_id to all the indexes, and all queries should include tenant_id.</p>
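<p>The mod-based routing defined by those CHECK constraints can be sketched outside the database. This hypothetical helper (not part of the schema) shows how a tenant_id maps to a child table name:</p>

```javascript
// Hypothetical illustration of the routing rule used above:
// tenant_id % 3 picks one of the three child tables.
function partitionFor(tenantId, partitionCount) {
  return 'job_transactions' + (tenantId % partitionCount);
}

console.log(partitionFor(7, 3));   // job_transactions1
console.log(partitionFor(42, 3));  // job_transactions0
```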
<p>Now we need to create a trigger so that any insert into the job_transactions table is put in the appropriate partition.</p>
<pre><code> CREATE OR REPLACE FUNCTION trace_insert()
RETURNS TRIGGER AS $$
BEGIN
IF ((NEW.tenant_id % 3) = 0 ) THEN
INSERT INTO job_transactions0 VALUES (NEW.*);
ELSIF ((NEW.tenant_id % 3) = 1 ) THEN
INSERT INTO job_transactions1 VALUES (NEW.*);
ELSIF ((NEW.tenant_id % 3) = 2 ) THEN
INSERT INTO job_transactions2 VALUES (NEW.*);
ELSE
RAISE EXCEPTION 'tenant_id out of range. Fix the trace_insert() function!';
END IF;
RETURN NULL;
END;
$$
LANGUAGE plpgsql;
CREATE TRIGGER trace_insert_trigger
BEFORE INSERT ON job_transactions
FOR EACH ROW EXECUTE PROCEDURE trace_insert();
</code></pre>
<p>Now any insert into the job_transactions table will be routed to the appropriate child partition, and queries against the parent table will transparently pull data from the child partitions.</p>
<p>One note: queries that include tenant_id will be fast. Those without it must scan multiple tables, and the whole performance improvement will be killed.</p>
</content>
</entry>
<entry>
<title>Nodejs-and-LCDproc</title>
<link href="http://www.atlantageek.com/2013/12/29/nodejs-add-lcdproc/"/>
<updated>2013-12-29T00:00:00-05:00</updated>
<id>http://www.atlantageek.com/2013/12/29/nodejs-add-lcdproc</id>
<content type="html"><p><img src="/img/posts/IMG_20131230_003453.jpg" alt="LCD display of bitcoin price" /></p>
<p>I developed my first node.js module today. It's called lcdproc-client and provides a client interface for lcdproc. For those not in the know, lcdproc is a tcp server that controls most LCD displays (not monitors, but the blocky text ones you see on media players).</p>
<p>I wish these were more popular; in my opinion they should be on every desktop PC. On mine I wanted to display bitcoin prices, and I wanted to do it in either ruby or node.js. Ruby's library didn't work and I couldn't find one for node.js. I did find one that worked in Perl, but I didn't want to write Perl code. Plus I had an idea that I might use this for a later raspberry pi project, and node.js works well on the raspberry pi. I did use the Perl code to capture the protocol and figure out how it worked.</p>
<p>Anyway, the <a href="https://npmjs.org/package/lcdproc-client">lcdproc-client</a> has been released so others can use lcdproc with node.js. Also, if you don't have an LCD screen you can use the curses simulation by installing lcdproc and running the daemon with the curses driver (LCDd -d curses).</p>
<p>Below is the code for the BTC display on the LCD.</p>
<pre><code>var Client = require('node-rest-client').Client;
var LcdClient = require('lcdproc-client').LcdClient;
lc = new LcdClient(13666,'localhost');
function get_bitcoin()
{
console.log("Get Bitcoin");
client = new Client();
client.registerMethod("jsonMethod", "http://blockchain.info/ticker", "GET");
client.methods.jsonMethod(function(data, response){
// parsed response body as js object
var obj = JSON.parse(data);
buy = obj.USD['buy'];
sell = obj.USD['sell'];
lc.widget_val("first_line",1,1,"BTC " );
lc.widget_val("second_line",1,2,"B:" + buy + " S:" + sell);
});
}
lc.on('ready', function() {
console.log("AAA");
console.log("WIDTH: " + lc.width);
console.log("HEIGHT: " + lc.height);
lc.screen("bacon");
lc.widget("first_line");
lc.widget_val("first_line",1,1,"This is a line");
lc.widget("second_line");
lc.widget_val("second_line",1,2,"This is a second line");
get_bitcoin();
setInterval(get_bitcoin, 300000);
});
lc.init();
</code></pre>
</content>
</entry>
<entry>
<title>Redesign of Wi-spy Web interface.</title>
<link href="http://www.atlantageek.com/2013/08/17/redesign-wi-spy-web-interface/"/>
<updated>2013-08-17T22:46:21-04:00</updated>
<id>http://www.atlantageek.com/2013/08/17/redesign-wi-spy-web-interface</id>
<content type="html"><p>So originally I wanted to build a web inteface for the wi-spy spectrum analyzer. I proved the concept with the first version. You can watch the <a href="http://www.youtube.com/watch?v=jj9u6VtkM3Y">short video</a>
yourself that goes over the design. The biggest problem with the application was that it required a lot of infrastructure. It required a special verion of spec_Tool to collect the data, node.js to supply the web server and a redis daemon to act inbetween. When I started adding code to store off historical data I realized I'd need another technology such as tokyo cabinet and the like. This proved even more difficult because I need a locking mechanism between the reader and the writer.</p>
<p>Finally I decided that it was time to get to the basics. The latest version of this tool is written totally in C (spectool_red) htat will collect data, supply the web interface and store the historical data in tokyo cabinet. This is really only one executable. More details can be found <a href="https://github.com/atlantageek/websocketsa">here</a></p>
</content>
</entry>
<entry>
<title>DVRs for over the air TV</title>
<link href="http://www.atlantageek.com/2013/05/12/dvrs-for-over-the-air-tv/"/>
<updated>2013-05-12T22:46:21-04:00</updated>
<id>http://www.atlantageek.com/2013/05/12/dvrs-for-over-the-air-tv</id>
<content type="html"><h1>DVRs for over the air TV</h1>
<p>One of the advantages of cable service is totally unrelated to the content: the set-top box is a great convenience, and the emergence of MSO-provided DVRs has added a lot of value to the service. So if you really do want to cut the cord, it's time to consider how you are going to replace that $10/month leased set-top box.</p>
<p>There are a couple of options. The grand-daddy of them all is Tivo (yes, they're still in business). Tivo and ReplayTV (which isn't) invented the DVR business. However, Tivo is notorious for being expensive: not just the initial price ($149-349) but also the $15/month fee. You can also get a lifetime account for $500 (that's a 3 year payback).<br/>
Used Tivos with lifetime memberships can be a good deal off of ebay. You can pick up a series 2 Tivo (OK, 8 years old) with a lifetime account for $100, and it should work well. The newer Tivos do more than TV, though: they do netflix, amazon and other stuff that's kind of cool if you don't have a smart tv or smart dvd player. But the monthly fee is what you were trying to get away from when you cut the cable cord. Tivo really needs to rethink their target market.</p>
<table>
<thead>
<tr>
<th>DVR</th>
<th>Size</th>
<th>Price </th>
<th>Tuner Count </th>
<th> Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>BriteView 980H</td>
<td> 320GB</td>
<td>$163.50</td>
<td>?</td>
<td>Buggy</td>
</tr>
<tr>
<td>ChannelMaster DTV7400</td>
<td> 320GB</td>
<td>$240</td>
<td>1</td>
<td>Seems solid, middle of road quality</td>
</tr>
<tr>
<td>Digital Stream DHP1000R</td>
<td> 320GB </td>
<td> $240 </td>
<td> 1 </td>
<td> buggy</td>