The Farmer…

I’ve started writing a new game. I switched from my second version of ‘Pachinko Fever’ to this ‘pseudo’ RPG-style game. We’ll see how it goes. I have the gameplay in mind… but it’s not finalized. I’m making sure that it’ll be interesting and addictive. There is a way to level up and fashion (learn) new ‘features’ in the game easily enough. I’ll go over the gameplay more as it solidifies a bit. The core concept is that you don’t fight; you create other creatures to fight for you.

I’m using Tiled to create tile maps… and I’m basing the landscape on my old house in Wisconsin and the land around it. I had to pick somewhere, so I figured I’d pick where I know. It’s been fun so far to model the environment, both the real and the fantastical. This will take time to develop. LibGDX is the running engine, and GIMP handles the graphics. (More on that later)

The first ‘zone’ is free. Unlocking the game via in-app purchase opens the other ones. The first place is home and Osti… what we called the land my home sat on. Troy Village and Troy are in the second zone. The river, Arena and Mazomanie follow next. My goal is to get Osti up first. If I can do that by summer, I’ll be a happy camper. It’s a Lovecraftian theme, but you may not notice that when it plays out. At least I’m hoping it stays subtle. I’m listening strongly to Extra Credits on this one.

The code for this game isn’t too hard. The real work is going to be in the story and graphics. I’m creating the basic framework for the game now while doodling bits and pieces. It’s pixel-art-style graphics. That’s important because what I found is that doing pixel art is easier for me than trying a cartoonish style, or heaven help me, realistic. And it’s simple enough using GIMP to handle these graphics. I’m sure my design is going to be problematic to some, but I’m using this as a learning process too. I’m reading more and more about art design in games and looking for common pitfalls.

Matias Ergo Pro keyboard…

This review of the Ergo Pro keyboard from Matias is more than a thinly veiled exercise I dreamed up to test the keyboard… but not much.

The keyboard was delivered today from Amazon. It had to replace a failing ‘Natural’ ergonomic keyboard from Microsoft, the kind I’ve typically used on all my computers over the years. I’ve tried different ergonomic keyboards, but always end up with that one. I like mechanical keyboards, but you tend not to find them in an ergonomic styling. The Kinesis Advantage Pro is mechanical, but just a bit too wacky to use every day. It’d be fine if every keyboard I touch was that one, but switching between keyboards would make that painful… and it’s too expensive to have everywhere.

So, Aaron at work got the Ergo Pro. It has mechanical switches, but quieter than regular switches. I tried it and liked it for the most part. At $200, it was at a price point where I’d consider it if I needed a new keyboard, but it’s too expensive to just buy outright.

Then the Microsoft keyboard on my main desktop broke. I opened up the Amazon app, and 10 seconds later the Matias was on its way. It’s a truly split keyboard. Two halves make up the keyboard, and you can separate them by any distance you want. The keys are mechanical, but as I mentioned, Matias went to lengths to make them quieter than other keys. They feel like Cherry MX Red switches, but a bit softer. The number pad is overlaid on the keyboard, so you have to press the function key and hit ‘U’ for a 4, ‘I’ for a 5… etc. That part isn’t great, but it doesn’t bother me much.

Like any modern USB keyboard, it hooked up fine. OS be damned… And I find it fairly easy to get comfortable with. I’m not at the same speed I’d expect with my old keyboard, but I don’t think that will take long to get back to. The biggest issue so far is the Control key next to the ‘N’ on the right-hand side. I keep hitting it when I mean to hit an ‘N’… but I think that’ll change as I get used to it.

I suppose the other issue is the height of the keyboard when you add the stands. You have three ways to set up the keyboard physically: flat, inverse tilt or tented. Flat is exactly as it sounds… just straight on the desk. Inverse tilt raises the front of the keyboard where the pads are, which is what I typically do. This puts your hands in a fairly comfortable position when typing for an extended time. Tenting is where the keyboard is lifted in the middle, and the edges are table-height. I’m using this now and I find it much better than the inverse tilt. The reason I call this an issue is that the tented height could be taller. But so far it’s good enough for me.

So, five hundred words later, and I find that the keyboard is doing just fine. I’m still accidentally opening new windows via the Ctrl-N I keep hitting, but it’s better now than at the start of this post. I’m completely enjoying the keyboard. Now it’s just a matter of getting work to buy me one for the office.

So… I’m impressed with the Nexus 6P

Recently I’ve been big into figuring out how to upgrade the sound coming from my audio system. I’ve looked at tube amps to make proper use of my turntable. (Yeah, that’s right… vinyl.) But I also have a ton of music as MP3 and FLAC files. I’ve looked at the Monoprice tube amp, which has a ton of good reviews. I’m trying to find the best way to work it into my current hardware collection…

But I’ve noticed that some disconnected components I have are ridiculously good. First, the Amazon Echo. It’s pretty much the best Bluetooth speaker you can get that also talks back to you. It’s not as good as my Samson BT3, but it has more features. The quality from this single tower of speakers is nice, if slightly limited. I played a bunch of old music tonight from my MP3 files via Google Music and it sounded nice. Easy enough to do… sounds ‘good enough’. Definitely worth the $100 when it first came out. I can’t complain.

Then I played from my Nexus 6P.

Digression… I really do love this phone. Considering the hell I’ve been through with the Nexus 9, it’s nice that the 6P has been so great since I got it. In every aspect this phone has performed well, better than any other device of this nature that I’ve owned… and surely better than I expected, by a long shot. My Diamond Rio Karma still had the best music-playlist implementation of any MP3 player… I even had devices that predated that dinosaur (MP3 Man)… Palm Pre, HTC and Samsung phones galore. All the while holding on to my Cowon for the music quality while spending cash on these… well, phones… where music was secondary at best.

But the audio quality of this 6P… the clarity… its speakers…

I’m a nut job. I’m the first to admit it. I grew up in a household of Bang & Olufsen… overpriced, but it sounds great. Cerwin-Vega speakers were my own ‘low-end’ system until I got something real. I never did spend what a quality system needed… I’m a cheapskate nut job, I suppose.

But I played Peter Gabriel on the Nexus 6P from the YouTube ‘Red’ music app (Don’t Give Up, with Kate Bush) and was seriously moved. I’ve not heard this quality in a long time. No expensive headphones… no highly specialized amp… just the (mostly) regular music app provided by Google. Just damn. The speakers didn’t buzz or sound overpowered. They just filled the space with music that I hadn’t heard in years.

The Nexus 6p is a quality music playing device.

Flour, rice and potato

I lost twelve pounds. I have eight to go. I stopped eating flour, rice and potato.

When I was a vegetarian, flour, rice and potato made up most of my diet. Tastes good, but you never really feel full. A while ago my doctor was concerned that my triglycerides were too high, and surprisingly, my cholesterol was too. I was overweight and had high blood pressure. He started talking about needing medicine to control it.

I kinda snapped.

I changed my diet: 1500 calories a day. No flour, rice or potatoes. Walk 2.5+ miles a day, five days a week. I monitor my blood pressure with an online-enabled blood pressure cuff. I track my caloric intake… and my weight with a scale that narcs on me. It’s been working.

Now for the real trick. I have to normalize my diet.

I cannot eat 1500 calories a day forever. I have to adjust it to the proper amount considering my height, desired weight and activity. When I have lost the full 20 pounds, I need to be eating the right amount for that weight. I have to keep tracking the calories I consume, add minor weight training to keep my muscles working… and quite frankly, get to a point where I’m doing this all naturally… as if it’s just second nature.
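That “proper amount” is easy to ballpark. Here’s a minimal sketch using the Mifflin-St Jeor formula; every number plugged in below is a placeholder for illustration, not my actual stats:

```python
# Rough maintenance-calorie estimate via the Mifflin-St Jeor BMR formula.
# All inputs below are illustrative placeholders, not my real numbers.

def bmr_mifflin(weight_kg, height_cm, age, male=True):
    """Basal metabolic rate in kcal/day."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return (base + 5) if male else (base - 161)

def maintenance_calories(weight_kg, height_cm, age, activity=1.375):
    """BMR scaled by an activity multiplier; 1.375 is roughly 'light
    exercise' -- about those 2.5-mile walks five days a week."""
    return bmr_mifflin(weight_kg, height_cm, age) * activity

# Example: a hypothetical target weight of 82 kg (~180 lb), 178 cm, age 45
print(round(maintenance_calories(82, 178, 45)))
```

Whatever the real inputs, the point is that maintenance comes out well above 1500 a day, which is exactly why the diet number can’t run forever.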

Just one question… what do I do with flour, rice and potatoes? When I start to increase my caloric intake to a normal level, where do those go? Potatoes are too easy to overdo… I’ll avoid them. Rice… well, maybe I’ll limit that to sushi. Flour? Bread? All my recipes… can I really re-introduce that again? Flour will be a real challenge.

Final Notes from the 2015 Cassandra Summit

This is my last post about the 2015 Cassandra Summit. This is mostly a list of random details that I wanted to keep track of. Most people may not find this useful.

DataStax 4.8 will have better ‘encryption at rest’ than previous DataStax versions. There are providers other than DataStax that should be looked at too. Note that you should use eCryptfs to encrypt the commit-log files, since they’re typically not encrypted at rest.

Slides for getting encryption right can be found on Nate’s SlideShare site. He covers a ton, including what’s wrong with how DataStax documents installing node-to-node and client-to-node encryption, and how to do it right.

Vnodes… 256 may be too high. Cassandra 3.0 will start using 64 vnodes per physical server instead. Do not mix single-token nodes and vnodes in the same datacenter. To get a mix in the same cluster, use two datacenters. Solr/Lucene and Spark currently want single-token nodes, but that’s changing.
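For reference, the vnode count is a one-line setting per node; a sketch of the relevant cassandra.yaml entries (values illustrative):

```yaml
# cassandra.yaml -- vnode configuration (values are illustrative).
# 256 is the 2.x default; lower counts are the direction 3.0 is headed.
num_tokens: 64

# For a single-token node, leave num_tokens unset and assign a token:
# initial_token: <token>
```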

Java drivers should have token-aware policies enabled. No load balancer between clients and the datacenter cluster. Seriously, your load balancer will do all the wrong things.

When developing code, use local consistency levels even if you have just one data center.  Also, you only think you need immediate consistency. When possible, use LOCAL_ONE for both reads and writes. (And don’t mistakenly use SimpleStrategy in production.)
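The keyspace half of that advice, sketched in CQL (keyspace and datacenter names are made up for illustration):

```sql
-- NetworkTopologyStrategy knows about datacenters; SimpleStrategy does not,
-- which is why the latter has no place in production.
CREATE KEYSPACE app_data
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3
  };
```

Then set LOCAL_ONE as the consistency level on each read and write in your driver, so requests never cross datacenters.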

Dropping keyspaces does not remove data from disk. (Snapshots.) Remember this, QA folks, for your integration tests.
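For test clusters, a sketch of how to actually reclaim the space:

```shell
# Dropping a keyspace auto-snapshots it first, so the data stays on disk.
# Clear the snapshots to free the space:
nodetool clearsnapshot

# Or, on throwaway test clusters only, turn the behavior off in cassandra.yaml:
# auto_snapshot: false
```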

In general, secondary indexes are useless, with the following caveats. If a partition has a ton of values, a secondary index is useful, provided you also supply the partition key. Spark integration can actually benefit from secondary indexes with the DSE install, as each Spark instance talks to its local Cassandra node.
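The good-versus-bad usage, sketched in CQL (table and column names are hypothetical):

```sql
CREATE TABLE sensor_readings (
  sensor_id    text,
  reading_time timestamp,
  status       text,
  PRIMARY KEY (sensor_id, reading_time)
);
CREATE INDEX ON sensor_readings (status);

-- Good: the partition key restricts the index lookup to one node.
SELECT * FROM sensor_readings
  WHERE sensor_id = 'abc123' AND status = 'error';

-- Bad: without the partition key, every node in the cluster gets queried.
SELECT * FROM sensor_readings WHERE status = 'error';
```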

Low values of commitlog_total_space_in_mb will reduce the number of memtables in memory, so you may need to raise that number; 4G may be appropriate. There is a direct correlation between heap size and commit-log size.
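As a cassandra.yaml fragment (the value is the 4G from above; tune to your heap):

```yaml
# When the commit log hits this cap, Cassandra flushes the oldest memtables
# to free segments -- so a low cap forces frequent small flushes.
commitlog_total_space_in_mb: 4096
```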

Compaction can be tuned to start when the SSTable count is between 4 and 32 SSTables per memtable. Fewer SSTables on disk makes reads faster, but compaction causes high I/O… so… yeah.

Memtables are HashMaps of array lists (currently).

Remember, if you enable RowCache in your table, the cassandra.yaml file needs to have it enabled too. (Each node)
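Both halves of that, sketched (the table name and sizes are made up for illustration):

```yaml
# cassandra.yaml, on every node -- the row cache is off (0) by default.
row_cache_size_in_mb: 256
```

```sql
-- And the table has to opt in as well:
ALTER TABLE sensor_readings
  WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};
```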

The leveled compaction strategy should only be used with SSDs. You don’t really need the commit log on SSDs, even if your SSTables are.

Do not manually compact. If you do, you will have to forever. Also, if you change the compaction strategy, the next compaction will be huge. So just don’t.

Cassandra is CPU-bound for writes, and uses memory for reads. 16G–64G of RAM is recommended even if the heap size is only 8G. Disk caching in Linux gets the rest of the memory, which helps you out a ton.

Cassandra’s sweet spot is 8 cores. More if you have Spark/Solr with Cassandra on the same box.

Sized compaction needs 50% of disk free. Leveled compaction needs 10% free. SSDs give you 3–5T/node; with rotational drives, 1T/node. Be careful if you go as high as 20T/node… rebuilds will suck, as much as your admin’s life will.

Expect nodes to be added. With single-token nodes you’ll have to double them up; with vnodes you can just add them one at a time.

Use nodetool cleanup after you add nodes to the cluster or decrease the replication factor. That will clean up disk space. It’s an optimized compaction. If you wait, it’ll clean itself up eventually.

Run repair weekly in 2.1. It looks like that will change in 3.0. Run repairs on a few nodes at a time to reduce overhead. Also, use the ‘pr’ setting so you’re not repairing too much. (It should have been the default.) ‘pr’ means a node only repairs data it owns, not data from other nodes. Repairing the data you own will also cause repairs on other nodes… so, yeah.
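In practice that’s:

```shell
# Weekly, a few nodes at a time. -pr (primary range) means each token range
# gets repaired exactly once across the cluster, not once per replica.
nodetool repair -pr
```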

Always use prepared statements. Always. If you are not, you’re doing something wrong. (They reduce load.)

Async queries are better, but more complicated.

Batch queries should stick in the same partition key for performance gain.
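A sketch of a single-partition batch (the table and values are hypothetical); the coordinator can apply it as one mutation because every row shares the partition key:

```sql
BEGIN BATCH
  INSERT INTO sensor_readings (sensor_id, reading_time, status)
    VALUES ('abc123', '2015-09-24 10:00:00', 'ok');
  INSERT INTO sensor_readings (sensor_id, reading_time, status)
    VALUES ('abc123', '2015-09-24 10:05:00', 'error');
APPLY BATCH;
```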

Cassandra/Lucene plugin that is recommended outside of DataStax: cassandra-lucene-index by Stratio.


Cassandra Summit: Conference Sessions

The Cassandra Summit that DataStax hosted this year had just shy of 140 sessions over two days. Each session was grouped into tracks such as operations, development and architecture. They had a halfway-decent app, built by DoubleDutch, that provided a way to schedule which sessions you wanted to see. The app worked well, and provided a few ‘games’ mostly designed to get you to visit the vendors.

The sessions were divided into three groups. The first group was geared toward managers: what people did, or how to integrate Cassandra into your company. Typically these were fairly useless; the ones I accidentally attended certainly were. There was a session billed as ‘hands on’ that was just an overview of installed technologies.

The second group was the technical deep-dives. A fairly crowded one consisted of folks from The Last Pickle going over the source code for how data is deleted in Cassandra. Extremely useful, as it shows why certain behaviors within Cassandra exist, and it guided you into programming with those behaviors in mind. There could have been more of these types of sessions, and in bigger rooms. I had to sit on the floor for one of them, even with my priority pass.

The third group was best practices, or “Hey, this is what worked for us.” The tech head from the Weather Group did a great presentation about their attempts to scale up their ability to process incoming datasets… showing what they tried first that failed, and what actually worked.

A few notes from the summit: Spark is everywhere. People seem to be using Spark with Cassandra for any type of analytics or reporting. Zeppelin has been getting a lot of mentions too. It’s an electronic notebook for creating and sharing Spark ‘recipes’ in the same way you can an RStudio project… perfect for folks in data analytics, or anyone looking for a quick way to visualize data in Cassandra. I need to install both Spark and Zeppelin and see what I can do there.

Cassandra Summit: Training and Certification

This last week I went to the Cassandra Summit that DataStax put on. The first day was training and certification, and the following two days were the conference itself. I had been playing with Cassandra for years, though nothing major, and certainly nothing in production yet.

The training itself was six days’ worth of material supplied within six hours. DataStax has the training online, and to a large degree, the session the day before the test was intended to be a review. You were supposed to take two classes online before the training; each class had about three hours of video and quizzes. But many of the folks at the training had never even looked at the site. So DataStax tried to cram tons of knowledge into everyone’s eye sockets in those six hours.

Honestly, I didn’t care about the certification; the training was more important to me. Hands-on usage of Cassandra is the only certification that’s really important here. If you don’t use Cassandra after getting your certification, then all that information you gained is likely lost within a few months of the course, at best. If instead you set up a few nodes and tried to store and retrieve data from them after the online training, you’d likely have the same level of knowledge as someone who passed the certification. Either way, you’ll end up with some tidbits of information that help keep that cluster alive.

I’m glad I went for the training. Having the certification is a nice ‘feature’… but it’ll actually mean something once we have Cassandra in production.