Wednesday, 25 May 2016

Unconventional Logging: Game Development and Uncovering User Behaviour






by Sarang Nagmote



Category - Website Development
More Information & Updates Available at: http://insightanalytics.co.in




Not long ago, I was given the chance to speak with Loggly’s Sven Dummer about the importance of logging for game development. However, I got a lot more than just that… Sven actually gave me a comprehensive tour of Loggly via screenshare, telling me a bit about the basics of logging—its purpose and how it’s done—and what particular tools Loggly offers to make life easier for those trying to sort through and make sense of the endless haystack of data that logs serve up. And after my crash course in logging and Loggly, Sven did indeed deliver a special use case for logging that is particular to game development, though I think it can be applied creatively elsewhere as well.
First off, let me recap a bit of information that Sven provided about logging and log management. If you want to skip ahead to the game dev use-case, feel free.

Crash Course: My Experience With Logging and Loggly

Upon sharing his screen with me, Sven first took me to the command line of his Mac to illustrate just how much information logging generates. He entered a command revealing a list of all the processes currently happening on his laptop and, as you’ve probably already guessed, there was a lot of information to show. Data spat out onto the page in chunks, and I quickly became overwhelmed by the velocity and disorganization of words and numbers perpetually scrolling onto the screen. This information—some of it obviously very useful to those who know what they’re looking for—was delivered piece by piece, very quickly. The format of the data was “pretty cryptic to people like you and me,” as Sven put it, but what we were looking at was relatively tame compared to the data formats of some logs.
And that’s just it: there is no standard format for log data. It can come in a variety of file types and is displayed differently depending on the type. In Sven’s words:
“Every application or component typically writes its own log data in its own log file, and there is no one standardized format, so these log files can look very different. So, if you want to make sense of the data, you have to be somewhat familiar with the formats.”
And continuing on to explain how this can become even more difficult to manage when pulling data from a multitude of sources, Sven gave this example:
“Let’s imagine you’re running a large complex web application… you’re in a business that’s running a webstore. In that case, you might have a very complicated setup with a couple of databases, a web server, a Java application doing some of your business logic—so you have multiple servers with multiple components, which all do something that basically makes up your application. And so, if something goes wrong, then the best way to trace things down is in the log data. But you have all these different components generating different log files in different formats. And, if your application is somewhat complex and you have 25 different servers, they all write the log data locally to the hard drive so you can imagine that troubleshooting that can become quite difficult.”
He continued on to explain how a log management tool like Loggly can gather together these many different logs (it supports parsing of many different formats out of the box) and display them in a unified format—not only to make the information more presentable, but also to securely provide access to an entire team:
“Let’s say you have an operations team and these folks are tasked with making sure that your system is up and running. If they were supposed to look at the log data on all these individual servers and on all these components, they would have to know how to: 1. log into those servers, 2. be able to reach the servers inside your network, 3. have credentials there, 4. know where the log files reside; and then, they would still be looking at all of these individual log files without getting the big picture.
However, instead, they could send all of their logs to Loggly, locating them in one place. This not only allows for a cohesive picture of all the logs that make up a web application [for example], but it also removes the need for everyone on your operations team to log into every single server in order to get access to the logs, which is important from a security perspective. Because, you don’t necessarily want everybody to be able to have administrative privileges to all these different servers.”
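The “unified format” idea Sven describes can be sketched in a few lines. This is a hypothetical normalizer, not Loggly’s actual parser—the log lines, field names, and regex below are invented for illustration—but it shows the core move: very different log formats collapsing into one common event shape.

```python
import json
import re

# Hypothetical normalizer: parse two different log formats
# into one unified event shape (source, timestamp, message).
APACHE_RE = re.compile(
    r'^(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3})')

def normalize(line, source):
    line = line.strip()
    if line.startswith("{"):                      # JSON application log
        record = json.loads(line)
        return {"source": source,
                "timestamp": record.get("time"),
                "message": record.get("msg")}
    match = APACHE_RE.match(line)                 # Apache-style access log
    if match:
        return {"source": source,
                "timestamp": match.group("ts"),
                "message": "%s -> %s" % (match.group("req"), match.group("status"))}
    return {"source": source, "timestamp": None, "message": line}

events = [
    normalize('10.0.0.5 - - [25/May/2016:10:14:02 +0000] "GET /cart HTTP/1.1" 500',
              "webserver"),
    normalize('{"time": "2016-05-25T10:14:02Z", "msg": "DB connection pool exhausted"}',
              "java-app"),
]
for event in events:
    print(event["source"], "|", event["message"])
```

With every component funneled through something like this, the operations team searches one stream instead of 25 servers’ worth of local files.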
At this point, Sven launched Loggly, and it was a complete sea change. Rather than a black terminal window overflowing with indistinguishable blobs of text, the interface proved to be much more organized and user-friendly. With Loggly, Sven showed me how to search for particular logs, filter out unwanted messages, drill down to a specific event and grab the surrounding logs, and display information in one standardized flow, so that it was much easier to sort, scan, and find what you’re after. He also pointed out how one might automate the system to track specific error messages (or other information) and deliver them to the team members best suited to act on them. Through Loggly’s available integrations, one might have this information delivered via a specific medium, like Slack or Hipchat, so that team members receive these notifications in real time and can act in the moment. Honestly, Sven showed me so many features that I don’t think I can cover them all in this post—if you want to see more about the features, take a look around this page for a while and explore the tutorial section.
Loggly integrated with Hipchat.

One thing I remember saying to Sven is that the command line view of logs looks a bit like the familiar green-tinged code lines that spastically scatter across the monitors in The Matrix, endlessly ticking out information on and on and on… and on. He was quick to point out that Loggly still provides a command line-esque view of logs via its Live Tail feature, but with more control. I highly recommend checking it out.
What logs look like to the untrained eye.
Loggly Live Tail in action... running in an OS X Terminal and on Windows PowerShell.

The Importance of Logging for Game Development

So, typically one might use logging to discover performance bottlenecks, disruptions in a system, or various other disturbances which can be improved after some careful detective work. However, when looked at through another lens, logs can offer up much more interesting information, revealing user behavior patterns that can be analyzed to improve a game’s design and creative direction. Let me take you through a couple of examples that Sven offered up which shed light on how one might use logs to uncover interesting conclusions.
Sven launched a Star Fox-esque flying game in his browser (a game written in Unity, a format which Loggly supports out of the box) and began guiding his spaceship through rings that were floating in the air. It was pretty basic, the point being to make it through each ring without crashing into the edges.
This is an image of Loggly's demo game... not Star Fox!

While flying, he opened up Live Tail and showed me the logs coming in near real time (he explained there was a very small network delay). Back in the game, he began switching camera angles, and I could see a corresponding log event every time he triggered the command. This is where it gets interesting…
“The camera changes are being recorded in the log and I can see them here. Now, this is interesting because it will also tell me from which IP address they’re coming. And this is just a very simple demo game, but I could also log the ID of the user for example and many more things to create a user behaviour profile. And, that is very interesting because it helps me to improve my game based on what users do.
For example, let’s say I find out that users always change the camera angle right before they fly through ring five, and perhaps a lot of users fail when they fly through this ring. And, maybe that’s a source of customer frustration… maybe they bounce the site when they reach that point. And maybe then, I realize that people are changing the camera because there’s a cloud in the way that blocks the view of ring five. Then I can tell my design team you know, maybe we should take the cloud out at this point. Or, we can tell the creative team to redesign the cloud and make it transparent. So, now we’re getting into a completely different area other than just IT operations here. When you can track user behavior you can use it to improve things like visuals and design in the game.”
Logs gathered from the change in camera angles within the demo game.
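The camera-angle scenario is easy to sketch. The snippet below is hypothetical—the event names, user IDs, and ring numbers are invented, and it is not the demo game’s actual code—but it shows the pattern: emit structured log events from the client, then aggregate them to surface where players change the camera most.

```python
import json
from collections import Counter

# Hypothetical structured game-event logger: each event becomes one JSON log line.
def log_event(stream, user_id, event, **fields):
    record = {"user": user_id, "event": event}
    record.update(fields)
    stream.append(json.dumps(record))

log_stream = []
log_event(log_stream, "u1", "camera_change", ring=5)
log_event(log_stream, "u2", "camera_change", ring=5)
log_event(log_stream, "u1", "ring_failed", ring=5)
log_event(log_stream, "u3", "camera_change", ring=2)

# Aggregate: which ring triggers the most camera changes?
camera_changes_by_ring = Counter(
    json.loads(line)["ring"]
    for line in log_stream
    if json.loads(line)["event"] == "camera_change")

print(camera_changes_by_ring.most_common(1))  # ring 5 stands out
```

A spike at one ring, correlated with failures there, is exactly the kind of signal that would send you looking for that view-blocking cloud.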

I found this idea fascinating, and Sven continued on, describing a conversation he had with an unnamed mobile game publisher who had recently used Loggly in a similar way…
“We showed this demo at GDC and there was actually somebody who visited our booth who I had a very interesting conversation with on this topic. Basically, they told me that they develop mobile games for smart phones and had plans to introduce a new character in one of their games for gamers to interact with. Their creative team had come up with 6 or 7 different visuals for characters, so their idea was to do simple A/B testing and find out which of these characters resonated best with their users through gathering and studying their logs in Loggly. Then they planned on scrapping the character models that didn’t do well.
However, when they did their A/B testing, they got a result that they were not at all prepared for. There was no distinctive winner, but the regional differences were huge. So, people in Europe vs. Asia vs. America—there was no one global winner, but rather there were clear winners by region. That was something they were neither expecting nor prepared for. And they told me that they actually reprioritized their road map and the work that they had planned for their development team and decided to redesign the architecture of their game so that it would support serving different players different characters based on the region they were in. They realized they could be significantly more successful and have a bigger gaming audience if they designed it this way.”
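A minimal sketch of that kind of regional A/B analysis follows. The event data, region codes, and variant names here are invented (this is not the publisher’s pipeline); the point is just counting interactions per (region, variant) and picking a winner per region instead of one global winner.

```python
from collections import defaultdict

# Invented (region, character variant) interaction events pulled from logs.
events = [
    ("EU", "A"), ("EU", "A"), ("EU", "B"),
    ("ASIA", "C"), ("ASIA", "C"), ("ASIA", "A"),
    ("US", "B"), ("US", "B"), ("US", "C"),
]

# Count interactions per region per variant.
counts = defaultdict(lambda: defaultdict(int))
for region, variant in events:
    counts[region][variant] += 1

# One clear winner per region, no single global winner.
winners = {region: max(variants, key=variants.get)
           for region, variants in counts.items()}
print(winners)
```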
Once again, I found this extremely interesting. The idea of using logs creatively to uncover user behavior was something completely novel to me. But apparently, user behavior is not the only type of behavior that can be uncovered.
“Just recently there was an article on Search Engine Land where an SEO expert explained how to answer some major questions about search engine optimization by using Loggly and log data. Again, a completely different area where someone is analyzing not user behavior but, in this case, search engine robots—when do they come to the website, are they blocked by something, do I see activity on webpages where I don’t want these search robots to be active? So, I think you get the idea, really these logs can be gathered and used for any sort of analysis.”
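That robot-analysis idea can also be sketched quickly. The access-log lines, bot list, and parsing below are invented for illustration (real logs would need a proper parser), but they show the shape of the question: which paths are crawlers hitting, and are they somewhere they shouldn’t be?

```python
import re
from collections import Counter

# Invented list of crawler user-agent markers.
BOT_RE = re.compile(r"(Googlebot|bingbot|Baiduspider)", re.IGNORECASE)

# Invented access-log lines: ip "REQUEST" status "user-agent".
log_lines = [
    '66.249.66.1 "GET /products HTTP/1.1" 200 "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '66.249.66.1 "GET /admin HTTP/1.1" 403 "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '157.55.39.1 "GET /products HTTP/1.1" 200 "Mozilla/5.0 (compatible; bingbot/2.0)"',
    '10.0.0.7 "GET /products HTTP/1.1" 200 "Mozilla/5.0 (Windows NT 10.0)"',
]

bot_hits = Counter()
for line in log_lines:
    if BOT_RE.search(line):
        path = line.split('"')[1].split()[1]   # request target from "GET /path ..."
        bot_hits[path] += 1

print(bot_hits)  # a crawler probing /admin is a hint to check robots.txt rules
```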
And, that’s basically the idea. By pulling in such vast amounts of data, each piece offering its own clues as to what is going on in an application, log management tools like Loggly act as a kind of magnifying glass for uncovering meaningful conclusions. And what does meaningful mean? Well, it’s not limited to performance bottlenecks and operational business concerns; logs can actually provide genuine insights into user behavior and inform creative decision-making based on analytics.

Swiftenv: Swift Version Manager










Swift 3 development is moving so fast at the moment that a new development snapshot comes out every couple of weeks. To manage this, Kyle Fuller has rather helpfully written swiftenv, which works on both OS X and Linux.
Once installed, usage is really simple. To install a new snapshot:

swiftenv install {version}

Where {version} is something like: DEVELOPMENT-SNAPSHOT-2016-05-09-a, though you can also use the full URL from the swift.org download page.
The really useful feature of swiftenv is that you can set the Swift version on a per-project basis. As change is so fast, projects are usually a version or so behind, e.g. at the time of writing, Kitura's current release (0.12.0) works with DEVELOPMENT-SNAPSHOT-2016-04-25-a.
We register a project specific Swift version using:

swiftenv local {version}
For example, for Kitura 0.12: swiftenv local DEVELOPMENT-SNAPSHOT-2016-04-25-a
Nice and easy!

Effective and Faster Debugging With Conditional Breakpoints










In order to find and resolve defects that prevent the correct operation of our code, we mostly use the debugging process. Through this process we, in a sense, "tease out" the code and observe the values of variables at runtime. Sometimes, this life-saving process can be time-consuming.
Today, most IDEs and even browsers make debugging possible. With the effective use of these tools we can make the debugging process faster and easier.
Below, I want to share some methods that help make the debugging process fast and effective. You will see Eclipse IDE and Chrome browser samples, but you can apply these methods in other IDEs and browsers, too.
In order to debug our Java code in Eclipse, we put a breakpoint on the line we want to observe:
When we run our code in debug mode, execution suspends on every iteration of the line with the breakpoint. We can also observe the instant values of variables while execution is suspended.
When we know the reason for a defect, instead of observing the instant values of variables on every iteration, we can specify a condition in the breakpoint's properties. This suspends execution only when the condition is met, so we can get to the case we expect much more quickly:
With the help of this property, we can even run arbitrary code when execution passes the breakpoint, without suspending execution.
We can also change the instant value of variables when execution passes the breakpoint. This way, we can prevent a case that would make the code throw an exception.
With the help of this property, we can also throw any exception from the breakpoint. This makes it possible to observe the handling of a rare exception.
It is possible to do this in Chrome, too. This time, we will debug our JavaScript code. To do this, press F12 or open the "Sources" panel under Tools > Developer Tools, select the code you want to debug, and add a breakpoint. After that, you can specify a condition so that execution suspends only when the condition is met.
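The same idea carries over to command-line debuggers: Python's pdb, for instance, lets you attach a condition to a breakpoint (break <line>, <condition>). Here is an illustrative sketch—the loop, values, and helper are all made up—of capturing state only when the suspect condition holds instead of stopping on every iteration:

```python
import pdb  # referenced in the comment below; uncomment set_trace() to suspend

SUSPECT_VALUE = 5000   # the "condition" we would put on the breakpoint
snapshots = []

def checked_step(i, total):
    if i == SUSPECT_VALUE:            # conditional-breakpoint condition
        # pdb.set_trace()             # uncomment to actually suspend here
        snapshots.append((i, total))  # record the state we care about instead
    return total + i

total = 0
for i in range(10000):
    total = checked_step(i, total)

print(snapshots)  # only the one iteration we care about was captured
```

Out of 10,000 iterations, the "breakpoint" fires exactly once, which is the whole time-saving point of conditional breakpoints.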

Fixing MySQL Scalability Problems With ProxySQL or Thread Pool










In this blog post, we’ll discuss fixing MySQL scalability problems using either ProxySQL or thread pool.
In the previous post, I showed that even MySQL 5.7 in read-write workloads is not able to maintain throughput. Oracle’s recommendation to play black magic with innodb_thread_concurrency and innodb_spin_wait_delay doesn’t always help. We need a different solution to deal with this scaling problem.
All the conditions are the same as in my previous run, but I will use:
  • ProxySQL limited to 200 connections to MySQL. ProxySQL has a capability to multiplex incoming connections; with this setting, even with 1000 connections to the proxy, it will maintain only 200 connections to MySQL.
  • Percona Server with the thread pool enabled and a thread pool size of 64.
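For reference, enabling the thread pool in Percona Server is a my.cnf change. A sketch, assuming the settings used in this post's test setup:

```ini
# my.cnf sketch (assumes Percona Server; values match this post's benchmark)
[mysqld]
thread_handling = pool-of-threads   # switch from one-thread-per-connection
thread_pool_size = 64               # thread groups, as used in the benchmark
```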
You can see final results here:
Fixing MySQL scalability problems
There are good and bad sides to both solutions. With ProxySQL, there is visible overhead at lower numbers of threads, but it keeps very stable throughput after 200 threads.
With the Percona Server thread pool, there is little-to-no overhead if the number of threads is less than the thread pool size, but after 200 threads it falls behind ProxySQL.
Here is the chart with response times:
I would say the correct solution depends on your setup:
  • If you already use or plan to use ProxySQL, you may use it to prevent MySQL from saturation
  • If you use Percona Server, you might consider trying to adjust the thread pool


Advanced Metrics Visualization Dashboarding With Apache Ambari










At Hortonworks, we work with hundreds of enterprises to ensure they get the most out of Apache Hadoop and the Hortonworks Data Platform. A critical part of making that possible is ensuring operators can quickly identify the root cause if something goes wrong.
A few weeks ago, we presented our vision for Streamlining Apache Hadoop Operations, and today we are pleased to announce the availability of Apache Ambari 2.2.2 which delivers on the first phase of this journey. With this latest update to Ambari, we can now put the most critical operational metrics in the hands of operators allowing them to:
  • Gain a better understanding of cluster health and performance metrics through advanced visualizations and pre-built dashboards.
  • Isolate critical metrics for core cluster services such as HDFS, YARN, and HBase.
  • Reduce time to troubleshoot problems, and improve the level of service for cluster tenants.

Advanced Metrics Visualization and Dashboarding

Apache Hadoop components produce a lot of metric data, and the Ambari Metrics System (introduced about a year ago as part of Ambari 2.0) provides a scalable, low-latency storage system for capturing those metrics. Understanding which metrics to look at, and why, takes experience and knowledge. To help simplify this process, and to be more prescriptive in choosing the right metrics to review when problems arise, Grafana (a leading graph and dashboard builder for visualizing time-series metrics) is now included with Ambari Metrics. The integration of Grafana with Ambari brings the most important metrics front-and-center. As we continue to streamline the operational experiences of HDP, operational metrics play a key role, and Grafana provides unique value to our operators.

How It Works

Grafana is deployed, managed, and pre-configured to work with the Ambari Metrics service. We are including a curated set of dashboards for core HDP components, giving operators at-a-glance views of the same metrics Hortonworks Support & Engineering review when helping customers troubleshoot complex issues.
Metrics displayed on each dashboard can be filtered by time, component, and contextual information (YARN queues for example) to provide greater flexibility, granularity and context.

Download Ambari and Get Started

We look forward to your feedback on this phase of our journey and encourage you to visit the Hortonworks Documentation site for information on how to download and get started with Ambari. Stay tuned for updates as we continue on the journey to streamline Apache Hadoop operations.

Tuesday, 24 May 2016

An Intro to Encryption in Python 3










Python 3 doesn’t have very much in its standard library that deals with encryption. Instead, you get hashing libraries. We’ll take a brief look at those in this chapter, but the primary focus will be on the following third-party packages: PyCrypto and cryptography. We will learn how to encrypt and decrypt strings with both of these libraries.

Hashing

If you need secure hashes or message digest algorithms, then Python’s standard library has you covered in the hashlib module. It includes the FIPS secure hash algorithms SHA1, SHA224, SHA256, SHA384, and SHA512 as well as RSA’s MD5 algorithm. Python also supports the adler32 and crc32 hash functions, but those are in the zlib module.
One of the most popular uses of hashes is storing the hash of a password instead of the password itself. Of course, the hash has to be a good one or it can be cracked. Another popular use case for hashes is to hash a file and then send the file and its hash separately. Then the person receiving the file can run a hash on the file to see if it matches the hash that was sent. If it does, then that means no one has changed the file in transit.
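That file-verification workflow is easy to sketch with hashlib alone. The file name and contents here are just examples; the chunked read is the standard way to keep large files out of memory:

```python
import hashlib

# Compute a file's digest in chunks so large files fit in memory.
def file_digest(path, algorithm="sha256", chunk_size=65536):
    hasher = hashlib.new(algorithm)
    with open(path, "rb") as fobj:
        for chunk in iter(lambda: fobj.read(chunk_size), b""):
            hasher.update(chunk)
    return hasher.hexdigest()

# Example file standing in for the one being transferred.
with open("payload.bin", "wb") as fobj:
    fobj.write(b"Python rocks!")

sent_digest = file_digest("payload.bin")      # sender computes and sends this
received_digest = file_digest("payload.bin")  # receiver recomputes it
print(sent_digest == received_digest)         # match -> file unchanged in transit
```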
Let’s try creating an md5 hash:
>>> import hashlib
>>> md5 = hashlib.md5()
>>> md5.update('Python rocks!')
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    md5.update('Python rocks!')
TypeError: Unicode-objects must be encoded before hashing
>>> md5.update(b'Python rocks!')
>>> md5.digest()
b'\x14\x82\xec\x1b#d\xf6N}\x16*+[\x16\xf4w'
Let’s take a moment to break this down a bit. First off, we import hashlib and then we create an instance of an md5 HASH object. Next, we add some text to the hash object and we get a traceback. It turns out that to use the md5 hash, you have to pass it a byte string instead of a regular string. So we try that and then call its digest method to get our hash. If you prefer the hex digest, we can do that too:
>>> md5.hexdigest()
'1482ec1b2364f64e7d162a2b5b16f477'
There’s actually a shortcut method of creating a hash, so we’ll look at that next when we create our sha1 hash:
>>> sha = hashlib.sha1(b'Hello Python').hexdigest()
>>> sha
'422fbfbc67fe17c86642c5eaaa48f8b670cbed1b'
As you can see, we can create our hash instance and call its hexdigest method at the same time. Then we print out the hash to see what it is. I chose to use the sha1 hash as it has a nice short hash that will fit the page better. But it’s also less secure, so feel free to try one of the others.

Key Derivation

Python has pretty limited support for key derivation built into the standard library. In fact, the only method that hashlib provides is the pbkdf2_hmac method, which is the PKCS#5 password-based key derivation function 2. It uses HMAC as its pseudorandom function. You might use something like this for hashing your password, as it supports a salt and iterations. For example, if you were to use SHA-256, you would need a salt of at least 16 bytes and a minimum of 100,000 iterations.
As a quick aside, a salt is just random data that you use as additional input into your hash to make it harder to “unhash” your password. Basically it protects your password from dictionary attacks and pre-computed rainbow tables.
Let’s look at a simple example:
>>> import binascii
>>> dk = hashlib.pbkdf2_hmac(hash_name='sha256',
        password=b'bad_password34',
        salt=b'bad_salt',
        iterations=100000)
>>> binascii.hexlify(dk)
b'6e97bad21f6200f9087036a71e7ca9fa01a59e1d697f7e0284cd7f9b897d7c02'
Here we create a SHA-256 hash on a password using a lousy salt but with 100,000 iterations. Of course, SHA is not actually recommended for creating keys from passwords. Instead, you should use something like scrypt. Another good option would be the third-party package bcrypt, which is designed specifically with password hashing in mind.
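For reference, newer Python versions (3.6+, when built against OpenSSL 1.1+) expose scrypt directly in hashlib. A sketch with illustrative cost parameters (not a tuning recommendation):

```python
import hashlib
import os

# Derive a key from a password with scrypt; the salt must be stored
# alongside the hash so the key can be re-derived for verification.
salt = os.urandom(16)
key = hashlib.scrypt(b"bad_password34", salt=salt,
                     n=2**14, r=8, p=1, dklen=32)

# Verification re-derives with the stored salt and compares.
candidate = hashlib.scrypt(b"bad_password34", salt=salt,
                           n=2**14, r=8, p=1, dklen=32)
print(candidate == key)
```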

PyCryptodome

The PyCrypto package is probably the most well-known third-party cryptography package for Python. Sadly, PyCrypto’s development stopped in 2012. Others have continued to release the latest version of PyCrypto, so you can still get it for Python 3.5 if you don’t mind using a third party’s binary. For example, I found some binary Python 3.5 wheels for PyCrypto on GitHub (https://github.com/sfbahr/PyCrypto-Wheels).
Fortunately, there is a fork of the project called PyCryptodome that is a drop-in replacement for PyCrypto. To install it on Linux, you can use the following pip command:

 pip install pycryptodome
 
Windows is a bit different:

 pip install pycryptodomex
 
If you run into issues, it’s probably because you don’t have the right dependencies installed or you need a compiler for Windows. Check out the PyCryptodome website for additional installation help or to contact support.
Also worth noting is that PyCryptodome has many enhancements over the last version of PyCrypto. It is well worth your time to visit their home page and see what new features exist.

Encrypting a String

Once you’re done checking their website out, we can move on to some examples. For our first trick, we’ll use DES to encrypt a string:
>>> from Crypto.Cipher import DES
>>> key = b'abcdefgh'
>>> def pad(text):
        while len(text) % 8 != 0:
            text += b' '
        return text
>>> des = DES.new(key, DES.MODE_ECB)
>>> text = b'Python rocks!'
>>> padded_text = pad(text)
>>> encrypted_text = des.encrypt(text)
Traceback (most recent call last):
  File "<pyshell#35>", line 1, in <module>
    encrypted_text = des.encrypt(text)
  File "C:\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\blockalgo.py", line 244, in encrypt
    return self._cipher.encrypt(plaintext)
ValueError: Input strings must be a multiple of 8 in length
>>> encrypted_text = des.encrypt(padded_text)
>>> encrypted_text
b'>üx‡²“üHÕ9VQ'
This code is a little confusing, so let’s spend some time breaking it down. First off, it should be noted that the key size for DES encryption is 8 bytes, which is why we set our key variable to an 8-character byte string. The string that we will be encrypting must be a multiple of 8 in length, so we create a function called pad that can pad any byte string out with spaces until its length is a multiple of 8. Next we create an instance of DES and some text that we want to encrypt. We also create a padded version of the text. Just for fun, we attempt to encrypt the original unpadded variant of the string, which raises a ValueError. Here we learn that we need that padded string after all, so we pass that one in instead. As you can see, we now have an encrypted string!
Of course, the example wouldn’t be complete if we didn’t know how to decrypt our string:
>>> des.decrypt(encrypted_text)
b'Python rocks!   '
Fortunately, that is very easy to accomplish, as all we need to do is call the decrypt method on our des object to get our decrypted byte string back. Our next task is to learn how to encrypt and decrypt a file with PyCrypto using RSA. But first we need to create some RSA keys!

Create an RSA Key

If you want to encrypt your data with RSA, then you’ll need to either have access to a public / private RSA key pair or you will need to generate your own. For this example, we will just generate our own. Since it’s fairly easy to do, we will do it in Python’s interpreter:
>>> from Crypto.PublicKey import RSA
>>> code = 'nooneknows'
>>> key = RSA.generate(2048)
>>> encrypted_key = key.exportKey(passphrase=code, pkcs=8,
        protection="scryptAndAES128-CBC")
>>> with open('/path_to_private_key/my_private_rsa_key.bin', 'wb') as f:
        f.write(encrypted_key)
>>> with open('/path_to_public_key/my_rsa_public.pem', 'wb') as f:
        f.write(key.publickey().exportKey())
First, we import RSA from Crypto.PublicKey. Then we create a silly passcode. Next we generate an RSA key of 2048 bits. Now we get to the good stuff. To generate a private key, we need to call our RSA key instance’s exportKey method and give it our passcode, which PKCS standard to use and which encryption scheme to use to protect our private key. Then we write the file out to disk.
Next, we create our public key via our RSA key instance’s publickey method. We used a shortcut in this piece of code by just chaining the call to exportKey with the publickey method call to write it to disk as well.

Encrypting a File

Now that we have both a private and a public key, we can encrypt some data and write it to a file. Here’s a pretty standard example:
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Cipher import AES, PKCS1_OAEP

with open('/path/to/encrypted_data.bin', 'wb') as out_file:
    recipient_key = RSA.import_key(
        open('/path_to_public_key/my_rsa_public.pem').read())
    session_key = get_random_bytes(16)

    cipher_rsa = PKCS1_OAEP.new(recipient_key)
    out_file.write(cipher_rsa.encrypt(session_key))

    cipher_aes = AES.new(session_key, AES.MODE_EAX)
    data = b'blah blah blah Python blah blah'
    ciphertext, tag = cipher_aes.encrypt_and_digest(data)

    out_file.write(cipher_aes.nonce)
    out_file.write(tag)
    out_file.write(ciphertext)
The first three lines cover our imports from PyCryptodome. Next we open up a file to write to. Then we import our public key into a variable and create a 16-byte session key. For this example we are going to be using a hybrid encryption method, so we use PKCS#1 OAEP, which is Optimal Asymmetric Encryption Padding. This allows us to write data of arbitrary length to the file. Then we create our AES cipher, create some data, and encrypt the data. This will return the encrypted text and the MAC. Finally, we write out the nonce, MAC (or tag), and the encrypted text.
As an aside, a nonce is an arbitrary number that is used only once in a cryptographic communication. They are usually random or pseudorandom numbers. For AES, it must be at least 16 bytes in length. Feel free to try opening the encrypted file in your favorite text editor. You should just see gibberish.
Now let’s learn how to decrypt our data:
from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP

code = 'nooneknows'

with open('/path/to/encrypted_data.bin', 'rb') as fobj:
    private_key = RSA.import_key(
        open('/path_to_private_key/my_private_rsa_key.bin').read(),
        passphrase=code)

    enc_session_key, nonce, tag, ciphertext = [
        fobj.read(x) for x in
        (private_key.size_in_bytes(), 16, 16, -1)]

    cipher_rsa = PKCS1_OAEP.new(private_key)
    session_key = cipher_rsa.decrypt(enc_session_key)

    cipher_aes = AES.new(session_key, AES.MODE_EAX, nonce)
    data = cipher_aes.decrypt_and_verify(ciphertext, tag)

print(data)
If you followed the previous example, this code should be pretty easy to parse. In this case, we are opening our encrypted file for reading in binary mode. Then we import our private key. Note that when you import the private key, you must give it your passcode; otherwise you will get an error. Next we read in our file. You will note that we read the encrypted session key first, then the next 16 bytes for the nonce, which is followed by the next 16 bytes for the tag, and finally the rest of the file, which is our data.
Then we need to decrypt our session key, recreate our AES key and decrypt the data.
You can use PyCryptodome to do much, much more. However we need to move on and see what else we can use for our cryptographic needs in Python.

The Cryptography Package

The cryptography package aims to be “cryptography for humans,” much like the requests library is “HTTP for Humans.” The idea is that you will be able to create simple cryptographic recipes that are safe and easy to use. If you need to, you can drop down to low-level cryptographic primitives, which require you to know what you’re doing, or you might end up creating something that’s not very secure.
If you are using Python 3.5, you can install it with pip, like so:

 pip install cryptography
 
You will see that cryptography installs a few dependencies along with itself. Assuming that they all completed successfully, we can try encrypting some text. Let’s give the Fernet symmetric encryption algorithm a try. The Fernet algorithm guarantees that any message you encrypt with it cannot be manipulated or read without the key you define. Fernet also supports key rotation via MultiFernet. Let’s take a look at a simple example:
>>> from cryptography.fernet import Fernet
>>> cipher_key = Fernet.generate_key()
>>> cipher_key
b'APM1JDVgT8WDGOWBgQv6EIhvxl4vDYvUnVdg-Vjdt0o='
>>> cipher = Fernet(cipher_key)
>>> text = b'My super secret message'
>>> encrypted_text = cipher.encrypt(text)
>>> encrypted_text
b'gAAAAABXOnV86aeUGADA6mTe9xEL92y_m0_TlC9vcqaF6NzHqRKkjEqh4d21PInEP3C9HuiUkS9fb6bdHsSlRiCNWbSkPuRd_62zfEv3eaZjJvLAm3omnya8='
>>> decrypted_text = cipher.decrypt(encrypted_text)
>>> decrypted_text
b'My super secret message'
First off we need to import Fernet. Next we generate a key. We print out the key to see what it looks like. As you can see, it’s a random byte string. If you want, you can try running the generate_key method a few times. The result will always be different. Next we create our Fernet cipher instance using our key.
Now we have a cipher we can use to encrypt and decrypt our message. The next step is to create a message worth encrypting and then encrypt it using the encrypt method. I went ahead and printed out the encrypted text so you can see that you can no longer read it. To decrypt our super secret message, we just call decrypt on our cipher and pass it the encrypted text. The result is a plain byte string of our original message.
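Since the key-rotation feature mentioned above is easy to miss, here is a brief sketch of MultiFernet in action. The variable names are my own; the pattern follows the cryptography package's documented MultiFernet API:

```python
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())
new_key = Fernet(Fernet.generate_key())

# A token encrypted under the old key, before any rotation happened
token = old_key.encrypt(b"My super secret message")

# MultiFernet decrypts with any listed key, but encrypts
# (and rotates) using the first key in the list
multi = MultiFernet([new_key, old_key])

# rotate() decrypts the token and re-encrypts it under new_key,
# letting you eventually retire old_key entirely
rotated = multi.rotate(token)
print(multi.decrypt(rotated))  # b'My super secret message'
```

This lets you introduce a new key without invalidating tokens that were issued under the old one.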

Wrapping Up

This chapter barely scratched the surface of what you can do with the PyCryptodome and cryptography packages. However, it does give you a decent overview of what can be done with Python when it comes to encrypting and decrypting strings and files. Be sure to read the documentation and start experimenting to see what else you can do!

Unreal Engine 4 Supports Google's New Daydream VR Platform










Over the past 24 hours the internet has been engulfed with talk about Google's new mobile VR platform, Daydream, which brings Android VR Mode (high-performance mobile VR features built on top of Android N) to a slew of new phones coming this fall.
Daydream places new demands on mobile VR, including higher resolution and improved graphics performance.
That's music to our ears, as people who thrive on pushing the limits of emerging (and established) platforms.

FOSTERING NEW MOBILE VR TECHNOLOGIES AND HARDWARE

One way we help VR developers is through releasing internal projects like Couch Knights, Showdown and the VR Editor for free.
We also continue to optimize and show our Bullet Train demo, which is known for its polished execution of motion control gameplay.
Natural input that is made for VR is hugely important in making you believe you really have been transported to another place, and nothing beats the immersive qualities of incorporating motion controls into a VR experience.
This brings us back around to Google's new platform.
In addition to its technical wins, Daydream introduces a new headset that comes with an intuitive wireless motion controller, which ships with every unit.
The Daydream controller is a major advancement in terms of how you can craft high-quality gameplay mechanics and exploration for VR on a mobile device.


It's great news for folks like Epic Games principal artist Shane Caudle, who wants to make games knowing that the input experiences he designs will be consistent, responsive, and enjoyable.
Take a look below to see a small-scope dungeon RPG that Shane built for Daydream in a couple of weeks using Unreal Engine 4 Blueprint visual scripting and simple fantasy-themed assets.

UNREAL ENGINE 4 SUPPORT FOR DAYDREAM IS AVAILABLE NOW

Moments ago, Epic Games CTO Kim Libreri announced on the Google I/O stage that we have brought support for Daydream to Unreal Engine 4.
Big thanks to our partners and friends at Hardsuit Labs who helped make the plugin a reality! We couldn't have done it without them.
Our Daydream integration is available now on GitHub (login required), and it's coming to the binary tools in the full Unreal Engine 4.12 launch, which is on track for release on June 1, 2016.
Here's what Kim had to say:
"At Epic our mission is to give developers the very best tools for building immersive and visually impressive experiences with great performance on every platform we support. We have been creating VR experiences for many years now that not only push visual fidelity but what’s possible in terms of input, interaction, characters and gameplay mechanics
"Every project we work on extends the capabilities of Unreal Engine, and we pass these improvements to developers on a daily basis. Today we’re proud to be part of this new chapter in VR history."
As Kim noted, Daydream empowers you to create rich, deeply interactive content for mobile VR, and we have only begun to scratch the surface of what is possible.
Epic Games Technical Director of VR and AR Nick Whiting is also on the ground in Mountain View.
As Nick noted in an interview, the sample game that Shane built for Daydream's reveal helps to establish a graphical bar in terms of what can be achieved at the moment, how far we can push the hardware, and how to leverage the controller's versatile touchpad and buttons.


 Inventory UI and interaction in our Daydream dungeon VR project

Ready to start building for Daydream? Check out our documentation, and visit the Google VR hub to learn how to set up a Daydream development kit.
Want to learn more right away? Join us today at 11AM PDT / 2PM EDT / 7PM BST on Twitch.tv/UnrealEngine.
Nick will be making a guest appearance from Google I/O. If you miss it, don't worry; we always post livestream archives to YouTube.com/UnrealEngine.
We can’t wait to see the awesome content that you are going to make with Daydream and Unreal Engine 4!