Wednesday, 25 May 2016

Unconventional Logging: Game Development and Uncovering User Behaviour






by Sarang Nagmote



Category - Website Development
More Information & Updates Available at: http://insightanalytics.co.in




Not long ago, I was given the chance to speak with Loggly’s Sven Dummer about the importance of logging for game development. However, I got a lot more than just that… Sven actually gave me a comprehensive tour of Loggly via screenshare, telling me a bit about the basics of logging—its purpose and how it’s done—and what particular tools Loggly offers up to make it easier for those trying to sort through and make sense of the endless haystack of data that logs serve up. And after my crash course in logging and Loggly, Sven did indeed deliver a special use-case for logging that was particular to game development, though I think it can be applied creatively elsewhere, as well.
First off, let me recap a bit of information that Sven provided about logging and log management. If you want to skip ahead to the game dev use-case, feel free.

Crash Course: My Experience With Logging and Loggly

Upon sharing his screen with me, Sven first took me to the command line of his Mac to illustrate just how much information logging generates. He entered a command revealing a list of all the processes currently happening on his laptop and, as you’ve probably already guessed, there was a lot of information to show. Data spat out onto the page in chunks, and I quickly became overwhelmed by the velocity and disorganization of words and numbers perpetually scrolling onto the screen. This information—some of it obviously very useful to those who know what they’re looking for—was delivered piece by piece very quickly. The format of the data was “pretty cryptic to people like you and me,” as Sven put it, but what we were looking at was easy compared to the data formats of some logs.
And that’s just it: there is no standard format for log data. It can come in a variety of file types and is displayed differently depending on the type. In Sven’s words:
“Every application or component typically writes its own log data in its own log file, and there is no one standardized format, so these log files can look very different. So, if you want to make sense of the data, you have to be somewhat familiar with the formats.”
And continuing on to explain how this can become even more difficult to manage when pulling data from a multitude of sources, Sven gave this example:
“Let’s imagine you’re running a large complex web application… you’re in a business that’s running a webstore. In that case, you might have a very complicated setup with a couple of databases, a web server, a Java application doing some of your business logic—so you have multiple servers with multiple components, which all do something that basically makes up your application. And so, if something goes wrong, then the best way to trace things down is in the log data. But you have all these different components generating different log files in different formats. And, if your application is somewhat complex and you have 25 different servers, they all write the log data locally to the hard drive so you can imagine that troubleshooting that can become quite difficult.”
He continued on to explain how a log management tool like Loggly can gather together these many different logs (it supports parsing of many different formats out of the box) and display them in a unified format—not only to make the information more presentable, but also to securely provide access to an entire team:
“Let’s say you have an operations team and these folks are tasked with making sure that your system is up and running. If they were supposed to look at the log data on all these individual servers and on all these components, they would have to know how to: 1. log into those servers, 2. be able to reach the servers inside your network, 3. have credentials there, 4. know where the log files reside; and then, they would still be looking at all of these individual log files without getting the big picture.
However, instead, they could send all of their logs to Loggly, locating them in one place. This not only allows for a cohesive picture of all the logs that make up a web application [for example], but it also removes the need for everyone on your operations team to log into every single server in order to get access to the logs, which is important from a security perspective. Because, you don’t necessarily want everybody to be able to have administrative privileges to all these different servers.”
At this point, Sven launched Loggly and it was a complete sea-change. Rather than a black terminal window overflowing with indistinguishable blobs of text, the interface proved to be much more organized and user-friendly. With Loggly, Sven showed me how to search for particular logs, filter out unwanted messages, drill down to a specific event and grab surrounding logs, and display information in one standardized flow, so that it was much easier for the viewer to sort, scan, and find what he/she is after. He also pointed out how one might automate the system to track and deliver specific error messages (or other information) to the team members for whom that information is best suited. Through Loggly’s available integrations, one might have this information delivered via a specific medium, like Slack or Hipchat, so that team members receive these notifications in real time and can act in the moment. Honestly, there were so many features available that Sven showed to me, I don’t think I can cover them all in this post—if you want to see more about the features, take a look around this page for a while and explore the tutorial section.
Loggly integrated with Hipchat.

One thing I remember saying to Sven is that the command line view of logs looks a bit like the familiar green-tinged code lines that spastically scatter across the monitors in The Matrix, endlessly ticking out information on and on and on… and on. He was quick to point out that Loggly still provides a command line-esque view of logs via its Live Tail feature, but with more control. I highly recommend checking it out.
What logs look like to the untrained eye.
Loggly Live Tail in action... running in an OS X Terminal and on Windows PowerShell.

The Importance of Logging for Game Development

So, typically one might use logging to discover performance bottlenecks, disruptions in a system, or various other disturbances which can be improved after some careful detective work. However, when looked at through another lens, logs can offer up much more interesting information, revealing user behavior patterns that can be analyzed to improve a game’s design and creative direction. Let me take you through a couple of examples that Sven offered up which shed light on how one might use logs to uncover interesting conclusions.
Sven launched a Star Fox-esque flying game in his browser (a game written in Unity, a format which Loggly supports out of the box) and began guiding his spaceship through rings that were floating in the air. It was pretty basic, the point being to make it through each ring without crashing into the edges.
This is an image of Loggly’s demo game... not Star Fox!

While flying, he opened up Live Tail and showed me the logs coming in near real-time (he explained there was a very small network delay). Back in the game, he began switching camera angles, and I could see a corresponding log event every time he triggered the command. This is where it gets interesting…
“The camera changes are being recorded in the log and I can see them here. Now, this is interesting because it will also tell me from which IP address they’re coming. And this is just a very simple demo game, but I could also log the ID of the user for example and many more things to create a user behaviour profile. And, that is very interesting because it helps me to improve my game based on what users do.
“For example, let’s say I find out that users always change the camera angle right before they fly through ring five, and perhaps a lot of users fail when they fly through this ring. And, maybe that’s a source of customer frustration… maybe they bounce the site when they reach that point. And maybe then, I realize that people are changing the camera because there’s a cloud in the way that blocks the view of ring five. Then I can tell my design team, you know, maybe we should take the cloud out at this point. Or, we can tell the creative team to redesign the cloud and make it transparent. So, now we’re getting into a completely different area other than just IT operations here. When you can track user behavior, you can use it to improve things like visuals and design in the game.”
Logs gathered from the change in camera angles within the demo game.
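To make this concrete, here is a minimal sketch (in Python) of the kind of structured event such a game might emit on each camera change. The field names and the emit_event helper are assumptions for illustration, not Loggly’s actual SDK; the point is simply that one JSON object per line is trivial for a log manager to parse:

import json
import logging

logger = logging.getLogger("game.events")
logging.basicConfig(level=logging.INFO)

def emit_event(user_id, event, **fields):
    # One JSON object per line keeps the events machine-parseable
    logger.info(json.dumps({"userId": user_id, "event": event, **fields}))

# Fired whenever the player switches the camera
emit_event("player-42", "camera_change", level="ring-5", camera="chase")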

I found this idea fascinating, and Sven continued on, describing a conversation he had with an unnamed mobile game publisher that recently used Loggly in a similar way…
“We showed this demo at GDC and there was actually somebody who visited our booth who I had a very interesting conversation with on this topic. Basically, they told me that they develop mobile games for smart phones and had plans to introduce a new character in one of their games for gamers to interact with. Their creative team had come up with 6 or 7 different visuals for characters, so their idea was to do simple A/B testing and find out which of these characters resonated best with their users through gathering and studying their logs in Loggly. Then they planned on scrapping the character models that didn’t do well.
“However, when they did their A/B testing, they got a result that they were not at all prepared for. There was no distinctive winner, but the regional differences were huge. So, people in Europe vs. Asia vs. America—there was no one winner, but rather there were clear winners by region. So, that was something that they were not expecting nor prepared for. And they told me that they actually reprioritized their road map and the work that they had planned for their development team and decided to redesign the architecture of their game so that it would support serving different players different characters based on the region that they were in. They realized they could be significantly more successful and have a bigger gaming audience if they designed it this way.”
Once again, I found this extremely interesting. The idea of using logs creatively to uncover user behavior was something completely novel to me. But apparently, user behavior is not the only type of behavior that can be uncovered.
“Just recently there was an article on Search Engine Land where an SEO expert explained how to answer some major questions about search engine optimization by using Loggly and log data. Again, a completely different area where someone is analyzing not user behavior but, in this case, search engine robots—when do they come to the website, are they blocked by something, do I see activity on webpages where I don’t want these search robots to be active? So, I think you get the idea, really these logs can be gathered and used for any sort of analysis.”
And, that’s basically the idea. By pulling in such vast amounts of data, each piece offering its own clues as to what is going on in an application, log management tools like Loggly act as a kind of magnifying glass for uncovering meaningful conclusions. And what does meaningful mean? Well, it’s not limited to performance bottlenecks and operational business concerns; it can actually provide genuine insights into user behavior or creative decision-making based on analytics.

Swiftenv: Swift Version Manager






by Sarang Nagmote



Category - Mobile Apps Development
More Information & Updates Available at: http://insightanalytics.co.in




Swift 3 development is so fast at the moment that a new development snapshot is coming out every couple of weeks. To manage this, Kyle Fuller has rather helpfully written swiftenv, which works on both OS X and Linux.
Once installed, usage is really simple. To install a new snapshot:

swiftenv install {version}

Where {version} is something like: DEVELOPMENT-SNAPSHOT-2016-05-09-a, though you can also use the full URL from the swift.org download page.
The really useful feature of swiftenv is that you can set the Swift version on a per-project basis. As change is so fast, projects are usually a version or so behind; e.g. at the time of writing, Kitura’s current release (0.12.0) works with DEVELOPMENT-SNAPSHOT-2016-04-25-a.
We register a project-specific Swift version using:

swiftenv local {version}
e.g. for Kitura 0.12: swiftenv local DEVELOPMENT-SNAPSHOT-2016-04-25-a
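Under the hood, swiftenv records the pinned version in a .swift-version file in the project root (an assumption based on how pyenv-style version managers work; worth verifying against the swiftenv docs), so the setting survives across shells and can be checked into version control:

$ cat .swift-version
DEVELOPMENT-SNAPSHOT-2016-04-25-a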
Nice and easy!

Effective and Faster Debugging With Conditional Breakpoints






by Sarang Nagmote



Category - Developer
More Information & Updates Available at: http://insightanalytics.co.in




In order to find and resolve defects that prevent the correct operation of our code, we mostly use the debugging process. Through this process, in a sense, we "tease out" the code and observe the values of variables at runtime. Sometimes, this life-saving process can be time-consuming.
Today, most IDEs and even browsers make debugging possible. With the effective use of these tools we can make the debugging process faster and easier.
Below, I want to share some methods that help us make the debugging process fast and effective. You will see Eclipse IDE and Chrome browser samples, but you can apply these methods in other IDEs and browsers, too.
In order to debug our Java code in Eclipse, we put a breakpoint on the line which we want to observe.
When we run our code in debug mode, the execution of the code suspends at every iteration of the line on which we put the breakpoint. We can also observe the instant values of variables when the execution suspends.
When we know the reason for a defect, instead of observing the instant values of variables in every iteration, we can just specify a condition in the breakpoint’s properties. This makes the execution suspend only when the condition is met. This way, we can quickly observe the case we expected.
With the help of this property, we can even run any code when the execution passes the breakpoint, without suspending the execution.
We can also change the instant values of variables when the execution passes the breakpoint. This way, we can prevent the case which makes the code throw an exception.
With the help of this property, we can also throw any exception from the breakpoint. This way, it is possible to observe how a rare exception is handled.
It is possible to do this in Chrome, too. This time, we will debug our JavaScript code. To do this, we can press F12 or open the "Sources" panel under Tools > Developer Tools, select the code which we want to debug, and add a breakpoint. After that, we can also specify a condition to suspend the execution only when the condition is met.
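The same idea carries over to command-line tools, too. As a hedged illustration, Python’s built-in pdb debugger accepts a condition when setting a breakpoint, so execution suspends only when the condition holds; the file name and line number below are placeholders:

# buggy_loop.py -- suppose only one iteration out of a thousand misbehaves
def compute(i):
    # placeholder for the logic under test
    return i * 2

total = 0
for i in range(1000):
    total += compute(i)

# In a pdb session, suspend only when i == 500 instead of at every iteration:
#   $ python -m pdb buggy_loop.py
#   (Pdb) break buggy_loop.py:8, i == 500
#   (Pdb) continue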

Fixing MySQL Scalability Problems With ProxySQL or Thread Pool






by Sarang Nagmote



Category - Databases
More Information & Updates Available at: http://insightanalytics.co.in




In this blog post, we’ll discuss fixing MySQL scalability problems using either ProxySQL or thread pool.
In the previous post, I showed that even MySQL 5.7 is not able to maintain throughput in read-write workloads. Oracle’s recommendation to play black magic with innodb_thread_concurrency and innodb_spin_wait_delay doesn’t always help. We need a different solution to deal with this scaling problem.
All the conditions are the same as in my previous run, but I will use:
  • ProxySQL limited to 200 connections to MySQL. ProxySQL has a capability to multiplex incoming connections; with this setting, even with 1000 connections to the proxy, it will maintain only 200 connections to MySQL.
  • Percona Server with the thread pool enabled and a thread pool size of 64
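For reference, here is roughly what those two setups look like in practice. Treat this as a sketch rather than the author’s actual configs; the values simply mirror the bullets above:

# Percona Server (my.cnf): enable the thread pool with a size of 64
thread_handling = pool-of-threads
thread_pool_size = 64

-- ProxySQL admin interface: cap connections to the MySQL backend at 200
UPDATE mysql_servers SET max_connections = 200;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;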
You can see final results here:
Fixing MySQL scalability problems
There are good and bad sides to both solutions. With ProxySQL, there is visible overhead at lower thread counts, but it keeps very stable throughput after 200 threads.
With the Percona Server thread pool, there is little-to-no overhead if the number of threads is less than the thread pool size, but after 200 threads it falls behind ProxySQL.
Here is the chart with response times:
I would say the correct solution depends on your setup:
  • If you already use or plan to use ProxySQL, you may use it to prevent MySQL from saturation
  • If you use Percona Server, you might consider trying to adjust the thread pool


Advanced Metrics Visualization Dashboarding With Apache Ambari






by Sarang Nagmote



Category - Data Analysis
More Information & Updates Available at: http://insightanalytics.co.in




At Hortonworks, we work with hundreds of enterprises to ensure they get the most out of Apache Hadoop and the Hortonworks Data Platform. A critical part of making that possible is ensuring operators can quickly identify the root cause if something goes wrong.
A few weeks ago, we presented our vision for Streamlining Apache Hadoop Operations, and today we are pleased to announce the availability of Apache Ambari 2.2.2 which delivers on the first phase of this journey. With this latest update to Ambari, we can now put the most critical operational metrics in the hands of operators allowing them to:
  • Gain a better understanding of cluster health and performance metrics through advanced visualizations and pre-built dashboards.
  • Isolate critical metrics for core cluster services such as HDFS, YARN, and HBase.
  • Reduce time to troubleshoot problems, and improve the level of service for cluster tenants.

Advanced Metrics Visualization and Dashboarding

Apache Hadoop components produce a lot of metric data, and the Ambari Metrics System (introduced about a year ago as part of Ambari 2.0) provides a scalable low-latency storage system for capturing those metrics.  Understanding which metrics to look at and why takes experience and knowledge.  To help simplify this process, and be more prescriptive in choosing the right metrics to review when problems arise, Grafana (a leading graph and dashboard builder for visualizing time-series metrics) is now included with Ambari Metrics. The integration of Grafana with Ambari brings the most important metrics front-and-center. As we continue to streamline the operational experiences of HDP, operational metrics play a key role, and Grafana provides a unique value to our operators.

How It Works

Grafana is deployed, managed and pre-configured to work with the Ambari Metrics service. We are including a curated set of dashboards for core HDP components, giving operators at-a-glance views of the same metrics Hortonworks Support & Engineering review when helping customers troubleshoot complex issues.
Metrics displayed on each dashboard can be filtered by time, component, and contextual information (YARN queues for example) to provide greater flexibility, granularity and context.

Download Ambari and Get Started

We look forward to your feedback on this phase of our journey and encourage you to visit the Hortonworks Documentation site for information on how to download and get started with Ambari. Stay tuned for updates as we continue on the journey to streamline Apache Hadoop operations.

Tuesday, 24 May 2016

An Intro to Encryption in Python 3






by Sarang Nagmote



Category - Website Development
More Information & Updates Available at: http://insightanalytics.co.in




Python 3 doesn’t have very much in its standard library that deals with encryption. Instead, you get hashing libraries. We’ll take a brief look at those in the chapter, but the primary focus will be on the following 3rd party packages: PyCrypto and cryptography. We will learn how to encrypt and decrypt strings with both of these libraries.

Hashing

If you need secure hashes or message digest algorithms, then Python’s standard library has you covered in the hashlib module. It includes the FIPS secure hash algorithms SHA1, SHA224, SHA256, SHA384, and SHA512 as well as RSA’s MD5 algorithm. Python also supports the adler32 and crc32 hash functions, but those are in the zlib module.
One of the most popular uses of hashes is storing the hash of a password instead of the password itself. Of course, the hash has to be a good one or it can be cracked. Another popular use case for hashes is to hash a file and then send the file and its hash separately. Then the person receiving the file can run a hash on the file to see if it matches the hash that was sent. If it does, then that means no one has changed the file in transit.
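As a quick illustration of that file-verification idea, here is a minimal sketch using hashlib; the file name and the expected digest are placeholders, and hashing in chunks means large files never need to fit in memory:

import hashlib

def file_hash(path, chunk_size=65536):
    # Feed the file to the hash in chunks rather than all at once
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha.update(chunk)
    return sha.hexdigest()

# Compare against the digest the sender published alongside the file
print(file_hash("downloaded_file.zip") == "expected_hex_digest_here")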
Let’s try creating an md5 hash:
>>> import hashlib
>>> md5 = hashlib.md5()
>>> md5.update('Python rocks!')
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    md5.update('Python rocks!')
TypeError: Unicode-objects must be encoded before hashing
>>> md5.update(b'Python rocks!')
>>> md5.digest()
b'\x14\x82\xec\x1b#d\xf6N}\x16*+[\x16\xf4w'
Let’s take a moment to break this down a bit. First off, we import hashlib and then we create an instance of an md5 HASH object. Next, we add some text to the hash object and we get a traceback. It turns out that to use the md5 hash, you have to pass it a byte string instead of a regular string. So we try that and then call its digest method to get our hash. If you prefer the hex digest, we can do that too:
>>> md5.hexdigest()
'1482ec1b2364f64e7d162a2b5b16f477'
There’s actually a shortcut method of creating a hash, so we’ll look at that next when we create our sha1 hash:
>>> sha = hashlib.sha1(b'Hello Python').hexdigest()
>>> sha
'422fbfbc67fe17c86642c5eaaa48f8b670cbed1b'
As you can see, we can create our hash instance and call its digest method at the same time. Then we print out the hash to see what it is. I chose to use the sha1 hash as it has a nice short hash that will fit the page better. But it’s also less secure, so feel free to try one of the others.

Key Derivation

Python has pretty limited support for key derivation built into the standard library. In fact, the only method that hashlib provides is the pbkdf2_hmac method, which is the PKCS#5 password-based key derivation function 2. It uses HMAC as its pseudorandom function. You might use something like this for hashing your password as it supports a salt and iterations. For example, if you were to use SHA-256 you would need a salt of at least 16 bytes and a minimum of 100,000 iterations.
As a quick aside, a salt is just random data that you use as additional input into your hash to make it harder to “unhash” your password. Basically it protects your password from dictionary attacks and pre-computed rainbow tables.
Let’s look at a simple example:
>>> import binascii
>>> dk = hashlib.pbkdf2_hmac(hash_name='sha256',
        password=b'bad_password34',
        salt=b'bad_salt',
        iterations=100000)
>>> binascii.hexlify(dk)
b'6e97bad21f6200f9087036a71e7ca9fa01a59e1d697f7e0284cd7f9b897d7c02'
Here we create a SHA256 hash on a password using a lousy salt but with 100,000 iterations. Of course, SHA is not actually recommended for creating keys of passwords. Instead, you should use something like scrypt. Another good option would be the 3rd party package, bcrypt. It is designed specifically with password hashing in mind.
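For completeness, here is a minimal sketch of the bcrypt approach mentioned above, assuming the third-party bcrypt package is installed (pip install bcrypt):

import bcrypt

password = b'bad_password34'
# gensalt() embeds the salt and a configurable work factor into the hash
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Later, at login time, check a candidate password against the stored hash
print(bcrypt.checkpw(b'bad_password34', hashed))  # True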

PyCryptodome

The PyCrypto package is probably the most well known 3rd party cryptography package for Python. Sadly, PyCrypto’s development stopped in 2012. Others have continued to release the latest version of PyCrypto, so you can still get it for Python 3.5 if you don’t mind using a 3rd party’s binary. For example, I found some binary Python 3.5 wheels for PyCrypto on Github (https://github.com/sfbahr/PyCrypto-Wheels).
Fortunately, there is a fork of the project called PyCryptodome that is a drop-in replacement for PyCrypto. To install it for Linux, you can use the following pip command:

 pip install pycryptodome
 
Windows is a bit different:

 pip install pycryptodomex
 
If you run into issues, it’s probably because you don’t have the right dependencies installed or you need a compiler for Windows. Check out the PyCryptodome website for additional installation help or to contact support.
Also worth noting is that PyCryptodome has many enhancements over the last version of PyCrypto. It is well worth your time to visit their home page and see what new features exist.

Encrypting a String

Once you’re done checking their website out, we can move on to some examples. For our first trick, we’ll use DES to encrypt a string:
>>> from Crypto.Cipher import DES
>>> key = 'abcdefgh'
>>> def pad(text):
        while len(text) % 8 != 0:
            text += ' '
        return text
>>> des = DES.new(key, DES.MODE_ECB)
>>> text = 'Python rocks!'
>>> padded_text = pad(text)
>>> encrypted_text = des.encrypt(text)
Traceback (most recent call last):
  File "<pyshell#35>", line 1, in <module>
    encrypted_text = des.encrypt(text)
  File "C:\Programs\Python\Python35-32\lib\site-packages\Crypto\Cipher\blockalgo.py", line 244, in encrypt
    return self._cipher.encrypt(plaintext)
ValueError: Input strings must be a multiple of 8 in length
>>> encrypted_text = des.encrypt(padded_text)
>>> encrypted_text
b'>\xfc...\xd59VQ'
This code is a little confusing, so let’s spend some time breaking it down. First off, it should be noted that the key size for DES encryption is 8 bytes, which is why we set our key variable to an eight-letter string. The string that we will be encrypting must be a multiple of 8 in length, so we create a function called pad that can pad any string out with spaces until it’s a multiple of 8. Next we create an instance of DES and some text that we want to encrypt. We also create a padded version of the text. Just for fun, we attempt to encrypt the original unpadded variant of the string, which raises a ValueError. Here we learn that we need that padded string after all, so we pass that one in instead. As you can see, we now have an encrypted string!
Of course, the example wouldn’t be complete if we didn’t know how to decrypt our string:
>>> des.decrypt(encrypted_text)
b'Python rocks!   '
Fortunately, that is very easy to accomplish as all we need to do is call the decrypt method on our des object to get our decrypted byte string back. Our next task is to learn how to encrypt and decrypt a file with PyCrypto using RSA. But first we need to create some RSA keys!

Create an RSA Key

If you want to encrypt your data with RSA, then you’ll need to either have access to a public / private RSA key pair or you will need to generate your own. For this example, we will just generate our own. Since it’s fairly easy to do, we will do it in Python’s interpreter:
>>> from Crypto.PublicKey import RSA
>>> code = 'nooneknows'
>>> key = RSA.generate(2048)
>>> encrypted_key = key.exportKey(passphrase=code, pkcs=8,
        protection="scryptAndAES128-CBC")
>>> with open('/path_to_private_key/my_private_rsa_key.bin', 'wb') as f:
        f.write(encrypted_key)
>>> with open('/path_to_public_key/my_rsa_public.pem', 'wb') as f:
        f.write(key.publickey().exportKey())
First, we import RSA from Crypto.PublicKey. Then we create a silly passcode. Next we generate an RSA key of 2048 bits. Now we get to the good stuff. To generate a private key, we need to call our RSA key instance’s exportKey method and give it our passcode, which PKCS standard to use and which encryption scheme to use to protect our private key. Then we write the file out to disk.
Next, we create our public key via our RSA key instance’s publickey method. We used a shortcut in this piece of code by just chaining the call to exportKey with the publickey method call to write it to disk as well.

Encrypting a File

Now that we have both a private and a public key, we can encrypt some data and write it to a file. Here’s a pretty standard example:
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Cipher import AES, PKCS1_OAEP

with open('/path/to/encrypted_data.bin', 'wb') as out_file:
    recipient_key = RSA.import_key(
        open('/path_to_public_key/my_rsa_public.pem').read())
    session_key = get_random_bytes(16)

    cipher_rsa = PKCS1_OAEP.new(recipient_key)
    out_file.write(cipher_rsa.encrypt(session_key))

    cipher_aes = AES.new(session_key, AES.MODE_EAX)
    data = b'blah blah blah Python blah blah'
    ciphertext, tag = cipher_aes.encrypt_and_digest(data)

    out_file.write(cipher_aes.nonce)
    out_file.write(tag)
    out_file.write(ciphertext)
The first three lines cover our imports from PyCryptodome. Next we open up a file to write to. Then we import our public key into a variable and create a 16-byte session key. For this example we are going to be using a hybrid encryption method, so we use PKCS#1 OAEP, which is Optimal Asymmetric Encryption Padding. This allows us to write data of arbitrary length to the file. Then we create our AES cipher, create some data and encrypt the data. This will return the encrypted text and the MAC. Finally we write out the nonce, MAC (or tag) and the encrypted text.
As an aside, a nonce is an arbitrary number that is only used once in a cryptographic communication. They are usually random or pseudorandom numbers. For AES, it must be at least 16 bytes in length. Feel free to try opening the encrypted file in your favorite text editor. You should just see gibberish.
Now let’s learn how to decrypt our data:
from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP

code = 'nooneknows'

with open('/path/to/encrypted_data.bin', 'rb') as fobj:
    private_key = RSA.import_key(
        open('/path_to_private_key/my_private_rsa_key.bin', 'rb').read(),
        passphrase=code)

    enc_session_key, nonce, tag, ciphertext = [
        fobj.read(x) for x in (private_key.size_in_bytes(), 16, 16, -1)]

    cipher_rsa = PKCS1_OAEP.new(private_key)
    session_key = cipher_rsa.decrypt(enc_session_key)

    cipher_aes = AES.new(session_key, AES.MODE_EAX, nonce)
    data = cipher_aes.decrypt_and_verify(ciphertext, tag)

print(data)
If you followed the previous example, this code should be pretty easy to parse. In this case, we are opening our encrypted file for reading in binary mode. Then we import our private key. Note that when you import the private key, you must give it your passcode. Otherwise you will get an error. Next we read in our file. You will note that we read in the encrypted session key first, then the next 16 bytes for the nonce, which is followed by the next 16 bytes which is the tag, and finally the rest of the file, which is our data.
Then we need to decrypt our session key, recreate our AES key and decrypt the data.
You can use PyCryptodome to do much, much more. However we need to move on and see what else we can use for our cryptographic needs in Python.

The Cryptography Package

The cryptography package aims to be “cryptography for humans” much like the requests library is “HTTP for Humans”. The idea is that you will be able to create simple cryptographic recipes that are safe and easy to use. If you need to, you can drop down to low-level cryptographic primitives, which require you to know what you’re doing or you might end up creating something that’s not very secure.
If you are using Python 3.5, you can install it with pip, like so:

 pip install cryptography
 
You will see that cryptography installs a few dependencies along with itself. Assuming that they all completed successfully, we can try encrypting some text. Let’s give the Fernet symmetric encryption algorithm a try. The Fernet algorithm guarantees that any message you encrypt with it cannot be manipulated or read without the key you define. Fernet also supports key rotation via MultiFernet. Let’s take a look at a simple example:
>>> from cryptography.fernet import Fernet
>>> cipher_key = Fernet.generate_key()
>>> cipher_key
b'APM1JDVgT8WDGOWBgQv6EIhvxl4vDYvUnVdg-Vjdt0o='
>>> cipher = Fernet(cipher_key)
>>> text = b'My super secret message'
>>> encrypted_text = cipher.encrypt(text)
>>> encrypted_text
b'gAAAAABXOnV86aeUGADA6mTe9xEL92y_m0_TlC9vcqaF6NzHqRKkjEqh4d21PInEP3C9HuiUkS9fb6bdHsSlRiCNWbSkPuRd_62zfEv3eaZjJvLAm3omnya8='
>>> decrypted_text = cipher.decrypt(encrypted_text)
>>> decrypted_text
b'My super secret message'
First off we need to import Fernet. Next we generate a key. We print out the key to see what it looks like. As you can see, it’s a random byte string. If you want, you can try running the generate_key method a few times. The result will always be different. Next we create our Fernet cipher instance using our key.
Now we have a cipher we can use to encrypt and decrypt our message. The next step is to create a message worth encrypting and then encrypt it using the encrypt method. I went ahead and printed out the encrypted text so you can see that you can no longer read the text. To decrypt our super secret message, we just call decrypt on our cipher and pass it the encrypted text. The result is we get a plain text byte string of our message.
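As a brief aside, here is a minimal sketch of the MultiFernet key rotation mentioned earlier. New messages are encrypted with the first key in the list, while tokens produced under any of the other keys can still be decrypted:

from cryptography.fernet import Fernet, MultiFernet

new_key = Fernet(Fernet.generate_key())
old_key = Fernet(Fernet.generate_key())

cipher = MultiFernet([new_key, old_key])

token = cipher.encrypt(b'My super secret message')  # uses new_key
print(cipher.decrypt(token))  # tries each key in turn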

Wrapping Up

This chapter barely scratched the surface of what you can do with PyCryptodome and the cryptography packages. However it does give you a decent overview of what can be done with Python in regards to encrypting and decrypting strings and files. Be sure to read the documentation and start experimenting to see what else you can do!

Unreal Engine 4 Supports Google’s New Daydream VR Platform






by Sarang Nagmote



Category - Mobile Apps Development
More Information & Updates Available at: http://insightanalytics.co.in




Over the past 24 hours the internet has been engulfed with talk about Google’s new mobile VR platform, Daydream, which brings Android VR Mode—high-performance mobile VR features built on top of Android N—to a slew of new phones coming this fall.
Daydream places new demands on mobile VR, including higher resolution and improved graphics performance.
That’s music to our ears, as people who thrive on pushing the limits of emerging (and established) platforms.

FOSTERING NEW MOBILE VR TECHNOLOGIES AND HARDWARE

One way we help VR developers is through releasing internal projects like Couch Knights, Showdown and the VR Editor for free.
We also continue to optimize and show our Bullet Train demo, which is known for its polished execution of motion control gameplay.
Natural input that is made for VR is hugely important in making you believe you really have been transported to another place, and nothing beats the immersive qualities of incorporating motion controls into a VR experience.
This brings us back around to Google’s new platform.
In addition to its technical wins, Daydream introduces a new headset that comes with an intuitive wireless motion controller, which ships with every unit.
The Daydream controller is a major advancement in terms of how you can craft high-quality gameplay mechanics and exploration for VR on a mobile device.


It’s great news for folks like Epic Games principal artist Shane Caudle, who wants to make games knowing that the input experiences he designs will be consistent, responsive and enjoyable.
Take a look below to see a small-scope dungeon RPG that Shane built for Daydream in a couple of weeks using Unreal Engine 4 Blueprint visual scripting and simple fantasy-themed assets.

UNREAL ENGINE 4 SUPPORT FOR DAYDREAM IS AVAILABLE NOW

Moments ago, Epic Games CTO Kim Libreri announced on the Google I/O stage that we have brought support for Daydream to Unreal Engine 4.
Big thanks to our partners and friends at Hardsuit Labs who helped make the plugin a reality! We couldn’t have done it without them.
Our Daydream integration is available now on GitHub (login required), and it’s coming to the binary tools in the full Unreal Engine 4.12 launch, which is on track for release on June 1, 2016.
Here’s what Kim had to say:
"At Epic our mission is to give developers the very best tools for building immersive and visually impressive experiences with great performance on every platform we support. We have been creating VR experiences for many years now that not only push visual fidelity but what’s possible in terms of input, interaction, characters and gameplay mechanics
"Every project we work on extends the capabilities of Unreal Engine, and we pass these improvements to developers on a daily basis. Today we’re proud to be part of this new chapter in VR history."
As Kim noted, Daydream empowers you to create rich, deeply interactive content for mobile VR, and we have only begun to scratch the surface of what is possible.
Epic Games Technical Director of VR and AR Nick Whiting is also on the ground in Mountain View.
As Nick noted in an interview, the sample game that Shane built for Daydream’s reveal helps to establish a graphical bar in terms of what can be achieved at the moment, how far we can push the hardware, and how to leverage the controller’s versatile touchpad and buttons.


Inventory UI and interaction in our Daydream dungeon VR project

Ready to start building for Daydream? Check out our documentation, and visit the Google VR hub to learn how to set up a Daydream development kit.
Want to learn more right away? Join us today at 11AM PDT / 2PM EDT / 7PM BST on Twitch.tv/UnrealEngine.
Nick will be making a guest appearance from Google I/O. If you miss it, don’t worry - we always post livestream archives to YouTube.com/UnrealEngine.
We can’t wait to see the awesome content that you are going to make with Daydream and Unreal Engine 4!

10 Considerations for Moving to Automated Testing






by Sarang Nagmote



Category - Developer
More Information & Updates Available at: http://insightanalytics.co.in




The following was contributed to DZone by Lubos Parobek, VP of Products, Sauce Labs.
In order to address the pressure around the need to quickly release quality software, organizations continue to adopt agile development techniques. Examples include continuous integration and continuous delivery (CI and CD). On the flipside, the traditional waterfall approach to building software is quickly becoming dated.                                             
In their groundbreaking book, Continuous Delivery, Jez Humble and David Farley define what a typical CI/CD deployment pipeline is, describing it as “an automated implementation of your application’s build, deploy, test, and release process.” They go on to recommend that as much of this pipeline should be automated as possible.
An integral part of a successful CI/CD pipeline is automated (versus manual) testing. The execution of pre-scripted tests on a web or mobile app saves considerable time, plus having test data accessible in detailed reports is valuable to development teams who can use this information to quickly identify issues. In short, automated testing is key to achieving a true CI/CD deployment.
But how do you know when to make the step towards automation? I recommend asking yourself the following ten questions:
  1. Is the test executed more than once?
  2. Is the test run on a regular basis (i.e. often reused, such as part of regression or build testing)?
  3. Is the test impossible or prohibitively expensive to perform manually, such as concurrency, soak/endurance testing, performance, and memory leak detection testing?
  4. Are there timing-critical components that are a must to automate?
  5. Does the test cover the most complex area (often the most error-prone area)?
  6. Does the test require many data combinations using the same test steps (i.e. multiple data inputs for the same feature)?
  7. Are the expected results constant (i.e. do not change or vary with each test)? Even if the results vary, is there a percentage tolerance that could be measured as expected results?
  8. Is the test very time-consuming, such as expected results analysis of hundreds of outputs?
  9. Is the test run on a stable application (i.e. the features of the application are not in constant flux)?
  10. Does the test need to be verified on multiple software and hardware configurations?
If you answered “yes” to most of these questions, then automation is likely right for you. Naturally any task that is repeated often and is labor-intensive is often the best candidate for automation. The end result is the ability to get the most from your CI/CD workflows, and being able to ship higher quality software faster.
But despite its benefits, it’s important to note the cases where automated testing doesn’t make sense. I’ll call them out here: 
  • Usability tests, which are conducted to discover how easy it is for users to accomplish their goals with your software. There are several different approaches to usability testing, from contextual enquiry to sitting users down in front of your application and filming them performing common tasks. Since usability, consistency of look and feel, and so on are difficult things to verify in automated tests, they are ideal for manual testing.
  • One-off tests that are run only once or infrequently will not be high-payoff tests to automate and are better done manually. When one-off tests keep coming back after a point, it then makes sense to consider automating them too. A good practice is to set a threshold for how much one-off testing your organization is willing to keep manual.       
The right technical approach involves knowing which tests to automate and which to continue manually. Regardless, organizations that invest in automated testing efforts are poised to get the most out of their CI/CD workflows.
The result is shipping higher quality software faster, offering huge competitive advantage in an increasingly competitive business environment. In short, considering the move to automation is worth it by leaps and bounds.
About the Author
At Sauce Labs, Lubos Parobek leads strategy and development of Sauce Labs’ web and mobile application testing platform as VP of product. His previous experience includes leadership positions at organizations including Dell KACE, Sybase iAnywhere, AvantGo and 3Com. Parobek holds a Masters in Business Administration from the University of California, Berkeley.

A Procedure for the SLM Clustering Algorithm






by Sarang Nagmote



Category - Databases
More Information & Updates Available at: http://insightanalytics.co.in




In the middle of last year, I blogged about the Smart Local Moving algorithm which is used for community detection in networks and with the upcoming introduction of procedures in Neo4j I thought it’d be fun to make that code accessible as one.
If you want to grab the code and follow along it’s sitting on the SLM repository on my GitHub.
At the moment, the procedure is hardcoded to work with a KNOWS relationship between two nodes but that could easily be changed.
To check that it’s working correctly, I thought it’d make most sense to use the Karate Club data set described on the SLM home page. I think this data set is originally from Networks, Crowds, and Markets.
I wrote the following LOAD CSV script to create the graph in Neo4j:
LOAD CSV FROM "file:///Users/markneedham/projects/slm/karate_club_network.txt" AS row
FIELDTERMINATOR " "
MERGE (person1:Person {id: row[0]})
MERGE (person2:Person {id: row[1]})
MERGE (person1)-[:KNOWS]->(person2)

(Graph visualisation of the imported karate club network)

Next, we need to call the procedure which will add an appropriate label to each node depending which community it belongs to. This is what the procedure code looks like:
public class ClusterAllTheThings
{
    @Context
    public org.neo4j.graphdb.GraphDatabaseService db;

    @Procedure
    @PerformsWrites
    public Stream<Cluster> knows() throws IOException
    {
        String query = "MATCH (person1:Person)-[r:KNOWS]->(person2:Person) " +
                       "RETURN person1.id AS p1, person2.id AS p2, toFloat(1) AS weight";
        Result rows = db.execute( query );

        ModularityOptimizer.ModularityFunction modularityFunction = ModularityOptimizer.ModularityFunction.Standard;
        Network network = Network.create( modularityFunction, rows );

        double resolution = 1.0;
        int nRandomStarts = 1;
        int nIterations = 10;
        long randomSeed = 0;

        double modularity;
        Random random = new Random( randomSeed );
        double resolution2 = modularityFunction.resolution( resolution, network );

        Map<Integer, Node> cluster = new HashMap<>();
        double maxModularity = Double.NEGATIVE_INFINITY;

        for ( int randomStart = 0; randomStart < nRandomStarts; randomStart++ )
        {
            network.initSingletonClusters();

            int iteration = 0;
            do
            {
                network.runSmartLocalMovingAlgorithm( resolution2, random );
                iteration++;
                modularity = network.calcQualityFunction( resolution2 );
            } while ( iteration < nIterations );

            if ( modularity > maxModularity )
            {
                network.orderClustersByNNodes();
                cluster = network.getNodes();
                maxModularity = modularity;
            }
        }

        for ( Map.Entry<Integer, Node> entry : cluster.entrySet() )
        {
            Map<String, Object> params = new HashMap<>();
            params.put( "userId", String.valueOf( entry.getKey() ) );

            // Label each person with the community they were assigned to
            db.execute( "MATCH (person:Person {id: {userId}}) " +
                        "SET person:`" + format( "Community-%d`", entry.getValue().getCluster() ),
                        params );
        }

        return cluster
                .entrySet()
                .stream()
                .map( ( entry ) -> new Cluster( entry.getKey(), entry.getValue().getCluster() ) );
    }

    public static class Cluster
    {
        public long id;
        public long clusterId;

        public Cluster( int id, int clusterId )
        {
            this.id = id;
            this.clusterId = clusterId;
        }
    }
}
I’ve hardcoded some parameters to use defaults which could be exposed through the procedure to allow more control if necessary. The Network#create function assumes it is going to receive a stream of rows containing columns ‘p1’, ‘p2’, and ‘weight’ to represent the ‘source’, ‘destination’, and ‘weight’ of the relationship between them.
We call the procedure like this:
CALL org.neo4j.slm.knows()
It will return each of the nodes and the cluster it’s been assigned to and if we then visualize the network in the Neo4j browser we’ll see this:

(Graph visualisation of the clustered network)

Which is similar to the visualisation from the SLM home page.
If you want to play around with the code, feel free. You’ll need to run the following commands to create the JAR for the plugin and deploy it.
$ mvn clean package
$ cp target/slm-1.0.jar /path/to/neo4j/plugins/
$ ./path/to/neo4j/bin/neo4j restart
And, you’ll need the latest milestone of Neo4j which has procedures enabled.

State of Global Wealth Management—Ripe for Technology Disruption






by Sarang Nagmote



Category - Data Analysis
More Information & Updates Available at: http://insightanalytics.co.in




“If (wealth management advisors) continue to work the way you have been, you may not be in business in five years” – Industry leader Joe Duran, 2015 TD Ameritrade Wealth Advisor Conference.
The wealth management segment is a potential high growth business for any financial institution. It is the highest customer touch segment of banking and is fostered on long term and extremely lucrative advisory relationships. It is also the ripest segment for disruption due to a clear shift in client preferences and expectations for their financial future. This three-part series explores the industry trends, business use cases mapped to technology and architecture, and disruptive themes and strategies.

Introduction to Wealth Management 

There is no one universally accepted definition of wealth management as it broadly refers to an aggregation of financial services. These include financial advisory, personal investment management and planning disciplines directly for the benefit of high-net-worth (HNW) clients. But wealth management has also become a highly popular branding term that advisors of many different kinds increasingly adopt. Thus this term now refers to a wide range of possible functions and business models.
Trends related to shifting client demographics, evolving expectations from HNW clients regarding their needs (including driving social impact), technology and disruptive competition are converging. New challenges and paradigms are afoot in the wealth management space, but on the other side of the coin, so is a lot of opportunity.
A wealth manager is a specialized financial advisor who helps a client construct an entire investment portfolio and advises on how to prepare for present and future financial needs. The investment portion of wealth management normally entails both asset allocation of a whole portfolio as well as the selection of individual investments. The planning function of wealth management often incorporates tax planning around the investment portfolio as well as estate planning for individuals as well as family estates.
There is no trade certification for a wealth manager. Several titles are commonly used, such as advisor, family office representative, private banker, etc. Most of these professionals hold CFP, CPA, or MBA credentials as well. Legal professionals are also sometimes seen augmenting their legal expertise with these certifications.

State of Global Wealth Management 

Private banking services are delivered to high net worth individuals (HNWI). These are the wealthiest clients that demand the highest levels of service and more customized product offerings than are provided to regular clients. Typically, wealth management is a subsidiary of a larger investment or retail banking conglomerate. Private banking also includes other services like estate planning and tax planning as we will see in a few paragraphs.
The World Wealth Report for 2015 was published jointly by Royal Bank of Canada (RBC) and Capgemini. Notable highlights from the report include:
  1. Nearly 1 million people in the world attained millionaire status in 2014
  2. The collective investible assets of the world’s HNWI totaled $56 trillion
  3. By 2017, the total assets under management for global HNWIs will rise beyond $70 trillion
  4. Asia Pacific has the world’s highest number of millionaires with both India and China posting the highest rates of growth respectively
  5. Asia Pacific also led the world in the increase in HNWI assets at 8.5%. North America was a close second at 8.3%. Both regions surpassed their five year growth rates for high net worth wealth
  6. Equities were the most preferred investment vehicle for global HNWI with cash deposits, real estate and other alternative investments forming the rest
  7. The HNWI population is also highly credit friendly
Asia Pacific is gradually becoming the financial center of the world—a fact that has not gone unnoticed among the banking community. Banks thus need to, as a general rule, become more global and focus more on markets beyond their traditional strongholds of North America and Western Europe.
The report also makes the point that despite the rich getting richer, global growth this year was more modest compared to previous years, with a 50% slowdown in the production of new HNWIs. This slower pace of growth means that firms need to move to a more relationship-centric model, specifically for a highly coveted segment: younger investors. The report stresses that wealth managers are currently unable to serve the different needs of HNW clients under the age of 45 from a mindset, business offering, and technology capability standpoint.
The Broad Areas of Wealth Management

The Components of Wealth Management Businesses

As depicted above, full service wealth management firms broadly provide services in the following areas:
  • Investment Advisory
  • A wealth manager is a personal financial advisor who helps a client construct an investment portfolio that helps prepare for life changes based on their respective risk appetites and time horizons. The financial instruments invested in range from the mundane (equities, bonds, etc.) to the arcane (hedging derivatives, etc.)
  • Retirement Planning
  • Retirement planning is an obvious function of a client’s personal financial journey. From an HNWI standpoint, there is a need to provide complex retirement services while balancing taxes, income needs, estate preservation, and the like.
  • Estate Planning Services
  • A key function of wealth management is to help clients pass on their assets via inheritance. Wealth managers help construct wills that leverage trusts and forms of insurance to help facilitate smooth inheritance.
  • Tax Planning
  • Wealth managers help clients manage their wealth in such a manner that tax impacts are reduced (e.g., under the IRS in the US). As pools of wealth increase, even small rates of taxation can have a magnified impact either way. The ability to achieve the right mix of investments from a tax perspective is a key capability.
  • Full Service Investment Banking
  • For sophisticated institutional clients, the ability to offer a raft of investment banking services is an extremely attractive capability.
  • Insurance Management
  • A wealth manager needs to be well versed in the kinds of insurance purchased by their HNWI clients so that the appropriate hedging services can be put in place.
  • Institutional Investments
  • Some wealth managers cater to institutional investors like pension funds and hedge funds and offer a variety of back office functions.
It is to be noted that the wealth manager is not necessarily an expert in all of these areas, but rather works closely with the different areas of an investment firm from a planning, tax, and legal perspective to ensure that clients accomplish the best outcomes.

Client Preferences and Trends

There are clear changing preferences on behalf of HNWIs, including:
  1. While older clients gave strong satisfaction scores to their existing wealth managers, younger clients' needs are largely being missed by the wealth management community.
  2. Regulatory and cost pressures are rising, leading to the commodification of services.
  3. Innovative automation and data usage techniques among new entrants (aka the FinTechs) are leading to the rise of "roboadvisor" services, which have already begun disrupting existing players in a massive manner in certain HNWI segments.
  4. A need to offer holistic financial services tailored to the behavioral needs of HNWI investors.

    Technology Trends

    The ability to sign up new accounts and offer them services spanning the above areas provides growth in the wealth management business. There has been a perception that wealth management as a sub sector has trailed other areas within banking from a technology and digitization standpoint. As with wider banking organizations, the wealth management business has been under significant pressure from the perspective of technology and the astounding pace of innovation seen over the last few years from a cloud, Big Data and open source standpoint. Here are a few trends to keep an eye on:
  • The Need for the Digitized Wealth Office
  • The younger HNWI clients (defined as under 45) use mobile technology as a way of interacting with their advisors. They demand a seamless experience across all of the above services using digital channels—a huge issue for established players as their core technology is still years behind providing a Web 2.0 like experience. The vast majority of applications are still separately managed with distinct user experiences ranging from client onboarding to servicing to transaction management. There is a crying need for IT infrastructure modernization ranging across the industry from cloud computing to Big Data to micro-services to agile cultures promoting techniques such as a DevOps approach.
  • The Need for Open and Smart Data Architecture
  • Siloed functions have led to siloed data architectures operating on custom built legacy applications. All of which inhibit the applications from using data in a manner that constantly and positively impacts the client experience. There is clearly a need to have an integrated digital experience both regionally and globally and to do more with existing data assets. Current players possess a huge first mover advantage as they offer highly established financial products across their large (and largely loyal and sticky) customer bases, a wide networks of physical locations, and rich troves of data that pertain to customer accounts and demographic information. However, it is not enough to just possess the data. They must be able to drive change through legacy thinking and infrastructures as things change around the entire industry as it struggles to adapt to a major new segment (millennial customers) who increasingly use mobile devices and demand more contextual services and a seamless and highly analytic-driven, unified banking experience—an experience akin to what consumers commonly experience via the Internet on web properties like Facebook, Amazon, Google, Yahoo and the like.
  • Demands for Increased Automation
  • The need to forge a closer banker/client experience is not just driving demand around data silos and streams themselves. It’s forcing players to move away from paper based models to a more seamless, digital and highly automated model to rework countless existing back and front office processes—the weakest link in the chain. While “Automation 1.0” focuses on digitizing processes, rules and workflow; “Automation 2.0” implies strong predictive modeling capabilities working at large scale—systems that constantly learn and optimize products & services based on client needs and preferences.
  • The Need to “Right-size” or Change Existing Business Models Based on Client Tastes and Feedback
  • The clear ongoing theme in the wealth management space is constant innovation. Firms need to ask themselves whether they are offering the right products for an increasingly affluent yet dynamic clientele.

    Conclusion

    The next post in this series will focus on the business lifecycle of wealth management. We will begin by describing granular use cases across the entire lifecycle from a business standpoint, and we’ll then examine the pivotal role of Big Data enabled architectures along with a new age reference architecture.
    In the third and final post in this series, we will round off the discussion with an examination of strategic business recommendations for wealth management firms, recommendations which I believe will drive immense business benefits by delivering innovative offerings and, ultimately, a superior customer experience.

    Tuesday, 17 May 2016

    What Is ASP.NET MVC? (And Other Beginner Questions)






    by Sarang Nagmote



    Category - Website Development
    More Information & Updates Available at: http://insightanalytics.co.in




    Ever since I started DanylkoWeb, I've been focused on ASP.NET development, specifically targeting MVC.
    As I've mentioned before, MVC is quite a breath of fresh air to Web Forms developers.
    However, I've never discussed the basics of ASP.NET along with MVC. Someone new who visits my site would be quite thrown off by the advanced code I place in my posts. A while ago, I created a terminology guide, but never really answered reader questions about the whos, whats, and wheres of ASP.NET MVC.
    Today, I want to answer some relatively fundamental questions a few readers have asked me in the past about how to get started, like "What is MVC?" and other beginner questions.
    NOTE: This post is specifically meant for beginners interested in getting started with ASP.NET and its related technologies.

    What Is MVC?

    MVC is an acronym that stands for Model-View-Controller. It's a programming pattern originally introduced in the 1970s by Trygve Reenskaug while working with Smalltalk-76. The pattern also appears in JavaScript frameworks like AngularJS, Aurelia, Ember, and Meteor.
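    To make the separation concrete, here is a minimal, framework-free sketch of the pattern in C#. The names (Product, ProductView, ProductController) are hypothetical and purely illustrative; the point is only that the model holds data, the view presents it, and the controller coordinates the two.

    using System;

    // Model: holds the data and nothing else.
    public class Product
    {
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    // View: knows how to present a model to the user.
    public static class ProductView
    {
        public static void Render(Product product) =>
            Console.WriteLine($"{product.Name}: {product.Price:C}");
    }

    // Controller: prepares the model and hands it to the view.
    public class ProductController
    {
        public void Show()
        {
            var product = new Product { Name = "Widget", Price = 9.99m };
            ProductView.Render(product);
        }
    }

    public static class Program
    {
        public static void Main() => new ProductController().Show();
    }

    In a web framework, the view would be an HTML template and the controller would be invoked by routing, but the division of responsibilities is the same.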

    What Is ASP?

    ASP stands for Active Server Pages, Microsoft's server-side web technology introduced in 1996. It's now commonly referred to as Classic ASP; in 2002, it was succeeded by ASP.NET.

    What Is .NET?

    .NET is a software framework developed by Microsoft. The original .NET Framework runs on Windows, while the newer, open-source .NET Core is designed to run across platforms, including Windows and Linux.

    What Is ASP.NET?

    ASP.NET is the successor to the Classic ASP technology (see previous) developed by Microsoft, and it is broken down into two styles of web development: Web Forms (released in 2002) and MVC (released in 2009). If you want to create ASP.NET websites, it's recommended that you use Microsoft's Visual Studio for your website development.

    What Is ASP.NET MVC?

    ASP.NET MVC is Microsoft's implementation of the Model-View-Controller (MVC) pattern, integrated into its ASP.NET technology for building modern websites.
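    As a taste of what that looks like in practice, here is a minimal, hypothetical controller; the HomeController name and greeting message are made up for illustration. By ASP.NET MVC convention, the Index action below renders the view file at Views/Home/Index.cshtml.

    using System.Web.Mvc;

    public class HomeController : Controller
    {
        // Handles GET /Home/Index (also the default route "/")
        public ActionResult Index()
        {
            ViewBag.Message = "Hello from ASP.NET MVC!";
            return View(); // renders Views/Home/Index.cshtml
        }
    }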

    What Is ASPX?

    ASPX is the file extension for web pages created using Microsoft's Web Forms; an example would be MainPage.aspx. An ASPX file consists mostly of HTML, with optional CSS and JavaScript.

    What Is ASP.NET Web Forms?

    Web Forms is a proprietary technology developed by Microsoft to manage state and form data across multiple web pages. The web is stateless by nature; Microsoft created stateful pages by introducing the Web Forms model.
    Most Web Form pages have an ASPX file containing the HTML, along with a code-behind file (.cs or .vb) of the same name that attaches code to that page.
    Based on the example above, our MainPage.aspx would also have a complementary MainPage.aspx.cs file. In the Web Forms world, you can't have one without the other (a sketch of such a pair follows below).
    For more information, head to Microsoft's site for a complete explanation of What is Web Forms?
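    Here is a minimal, hypothetical sketch of the markup side of that pairing. The file name, namespace, and control IDs (GreetingLabel, SubmitButton) are invented for illustration:

    <%@ Page Language="C#" AutoEventWireup="true"
        CodeBehind="MainPage.aspx.cs" Inherits="MyApp.MainPage" %>
    <html>
    <body>
        <!-- runat="server" makes these controls addressable from code-behind -->
        <form id="form1" runat="server">
            <asp:Label ID="GreetingLabel" runat="server" />
            <asp:Button ID="SubmitButton" runat="server"
                Text="Submit" OnClick="SubmitButton_Click" />
        </form>
    </body>
    </html>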

    What Is Code-behind?

    Code-behind is the C# (or VB.NET) code file that sits behind an ASPX file. It's used specifically with the Web Forms technology and compiles into an assembly (DLL).
    The idea is to keep the code that drives a specific ASPX web page alongside that page.
    Code-behind is strictly server-side: it processes the logic for a particular page on the server and then serves the resulting HTML to the browser.
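    Continuing the hypothetical MainPage.aspx sketch from above, its code-behind might look like the following; the partial class wires server-side handlers to the controls declared in the markup:

    // MainPage.aspx.cs: the code-behind for MainPage.aspx
    using System;
    using System.Web.UI;

    namespace MyApp
    {
        public partial class MainPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Only set the initial text on the first request,
                // not on postbacks triggered by the button.
                if (!IsPostBack)
                {
                    GreetingLabel.Text = "Hello from the server!";
                }
            }

            protected void SubmitButton_Click(object sender, EventArgs e)
            {
                GreetingLabel.Text = "Button clicked.";
            }
        }
    }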

    What's the Difference Between a Website and a Web Application?

    A majority of Microsoft web developers are used to the idea of Web Applications because they are compiled when published, meaning they serve pages faster.
    Websites, in Microsoft terms, are sites not compiled at design time but only when a page on IIS is first requested, which makes the response slower for that first request. The next time the same page is requested, IIS reuses the compiled code.
    A Website also doesn't contain a project file (.csproj or .vbproj) to organize your web pages, whereas Web Applications use project and solution (.sln) files.
    For additional differences between a website and a web application, refer to Microsoft's Web Application Projects vs Web Site Projects in Visual Studio.

    What Is IIS?

    IIS stands for Internet Information Services (originally Internet Information Server) and is Microsoft's web server for running ASP.NET websites.
    You may also hear the term IIS Express, which is a lightweight version of IIS that lets developers test their code locally in a browser before deploying the application. It ships with Visual Studio.

    How Many Versions of ASP.NET Are There?

    Currently, there are twelve versions:
    • Version 1.0 (January 16, 2002)
    • Version 1.1 (April 24, 2003)
    • Version 2.0 (November 7, 2005)
    • Version 3.0 (November 6, 2006)
    • Version 3.5 (November 19, 2007)
    • Version 4.0 (April 12, 2010)
    • Version 4.5 (August 15, 2012)
    • Version 4.5.1 (October 17, 2013)
    • Version 4.5.2 (May 5, 2014)
    • Version 4.6 (July 20, 2015)
    • Version 5.0 RC1 (November 18, 2015)
    • Version 5.0 (June-ish, Q2 2016; since renamed ASP.NET Core 1.0)

    How Do I Create an ASP.NET Web Application? (Or How to Develop a Website Using ASP.NET?)

    To create an ASP.NET web application, you need an IDE (Integrated Development Environment) like Visual Studio to organize all of your files for your website.
    An IDE helps in two ways:
    1. As mentioned, it organizes your files into a manageable project.
    2. It dramatically enhances your development by adding time-saving features not included in text editors.
    An IDE can also rapidly create a simple boilerplate web application, instead of you creating every single file by hand.

    Where to Host ASP.NET Websites?

    There are way too many to list on this page. The majority of Microsoft web developers are moving toward Azure for their hosting needs, but you could very easily host your website with someone else.
    Personally, I host my websites on GoDaddy (afflink) and pay less than $10 a month for my "virtual real estate." I've had two issues with them in over 15 years, so I can probably cut them some slack. ;-)
    There are a number of other great ASP.NET hosting sites as well; next to Azure, they are the best options out there.

    How Do I Learn ASP.NET?

    Blogs and Microsoft's Virtual Academy (free) are probably the best ways to learn, but if you are looking for something more, may I recommend Pluralsight.
    Pluralsight has been my primary source for learning; I've personally been with them for over two years.
    Another great online learning site is Udemy.com. They provide a large list of courses with very reputable instructors. I even wrote a beginner's guide to ASP.NET MVC for them.
    NOTE: If you are interested in learning about ASP.NET MVC, I will be offering an ebook showing you how to create a website from scratch using ASP.NET MVC.
    Sign up to be notified when the ebook is published.

    Conclusion

    I hope these basic questions get you started on your way to learning web development using Microsoft's ASP.NET technology.
    If you have any more questions, please post them below and I will add them, making this a complete list for new web developers wanting to learn ASP.NET.
    For the veterans out there, did I miss any basic questions? What do newbies ask you? Post your comments below.

    Android Studio v2.1.1 Released to Fix Security Vulnerabilities






    by Sarang Nagmote



    Category - Mobile Apps Development
    More Information & Updates Available at: http://insightanalytics.co.in




    Last week, the Android team released the Android Studio v2.1.1 update to address two security concerns found in previous versions. According to JetBrains, there were both built-in web server vulnerabilities and internal RPC vulnerabilities that could result in malicious data access. However, the updates for versions 1.5.1, 2.0, and 2.1 should reduce the security risk in the IDE. For some developers, upgrading to a higher version may not be in their best interest, so a zip file of v1.5.2 is also being offered so they can continue their work as well.
    This didn't only affect Android Studio users; it affected all IntelliJ-based IDEs v2016.1 and older. There is a nifty table for determining which versions are affected, making it easy to see if you need to update.
    Although Android and JetBrains said that there have been no incidents reported relating to the matter, they still worked quickly to resolve the issue, ensuring that developers can continue to work in a safe environment.
    It is highly recommended that you update as soon as possible. You can download the v2.1.1 patch here. If needed, the zip file for v1.5.2 can be found here.
    Has the security vulnerability or forced update affected your work in Android Studio in any way? Did the occurrence discourage you from using Android Studio in the future? Let us know what you think in the comments section.

    Docker and Jenkins: Orchestrating Continuous Delivery






    by Sarang Nagmote



    Category - Developer
    More Information & Updates Available at: http://insightanalytics.co.in




    This past week I had the honor of speaking at the Docker Barcelona Meetup about how to use Jenkins for typical Docker tasks like creating images, publishing them, and keeping a trace of what has happened to them. Finally, I introduced the new (or not-so-new) Jenkins Pipeline plugin, which allows you to create your continuous delivery pipeline as code using a Groovy DSL, instead of relying on static steps as with FreeStyle jobs. At the end, I showed the crowd how to use it with Docker.
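    To give a flavor of pipeline-as-code, here is a minimal scripted-pipeline sketch in the Groovy DSL (not the exact code from the talk). It assumes the Docker Pipeline plugin is installed; the image name, registry URL, and credentials ID are placeholders.

    // Jenkinsfile (scripted pipeline), assuming the Docker Pipeline plugin
    node {
        stage 'Checkout'
        checkout scm                      // pull the repo that holds the Dockerfile

        stage 'Build image'
        def image = docker.build("myorg/myapp:${env.BUILD_NUMBER}")

        stage 'Test'
        image.inside {                    // run tests inside the freshly built container
            sh 'make test'
        }

        stage 'Publish'
        docker.withRegistry('https://registry.example.com', 'registry-creds') {
            image.push()                  // push the tagged image; Jenkins keeps the trace
        }
    }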
    You can see the slides on SlideShare or as HTML.
    We keep learning,
    Alex.