1. Tracking Application Response Time with NGINX, Filebeat and Elasticsearch

    Recently we needed to enable response-time monitoring on an NGINX server. Let me summarise the steps needed to bring response times from NGINX into Elasticsearch.

    NGINX Configuration

    In order to do so we had to define a new log format. That topic is covered in much detail in a lincolnloop.com post from back on Nov 09, 2010! In short, you need to add a log format to nginx.conf:
    log_format timed_combined '$remote_addr - $remote_user [$time_local] '
        '"$request" $status $body_bytes_sent '
        '"$http_referer" "$http_user_agent" '
        '$request_time $upstream_response_time $pipe';
    
    The next step is to modify the access_log directives to use the new format:
    access_log /var/log/nginx/yourdomain.com.access.log timed_combined;
    
    Once the configuration files have been updated, run nginx -t to test them. If NGINX likes your new configuration, run nginx -s reload so it starts using it.
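    With the new format in place, every access-log line ends with the three extra fields. As a quick illustration (the log line below is invented, not from a real server), the last three whitespace-separated fields can be pulled out with awk:

    ```shell
    # A hypothetical access-log line in the timed_combined format:
    line='203.0.113.7 - - [10/Oct/2020:13:55:36 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.68.0" 0.042 0.040 .'

    # The last three fields are $request_time, $upstream_response_time and $pipe:
    echo "$line" | awk '{print $(NF-2), $(NF-1), $NF}'
    # → 0.042 0.040 .
    ```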

    Filebeat Configuration

    Filebeat is a lightweight shipper for logs. We use it to deliver logs to our Elasticsearch cluster, and we use Kibana to review logs and metrics. Filebeat uses grok patterns to parse log files, so all you need to do is update the grok pattern Filebeat uses to parse NGINX logs. In my case it’s located at
    /usr/share/filebeat/module/nginx/access/ingest/pipeline.yml
    
    I added a new line to the end of the patterns: definition:
    %{NUMBER:http.request.time:double} %{NUMBER:upstream.request.time:double} %{DATA:pipelined}
    
    Here is what I ended up with:
      ...  
      patterns:
        - (%{NGINX_HOST} )?"?(?:%{NGINX_ADDRESS_LIST:nginx.access.remote_ip_list}|%{NOTSPACE:source.address})
          - (-|%{DATA:user.name}) \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}"
          %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long}
          "(-|%{DATA:http.request.referrer})" "(-|%{DATA:user_agent.original})"
          %{NUMBER:http.request.time:double} (-|%{NUMBER:http.request.upstream.time:double}) %{DATA:http.request.pipelined}
      ...
    
    • The http.request.time variable represents the full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body.
    • The http.request.upstream.time variable represents the time between establishing a connection to an upstream server and receiving the last byte of the response body.
    • The http.request.pipelined variable is “p” if the request was pipelined, “.” otherwise.
    Please note that you can name these new variables however you like. For example, instead of http.request.time it could be named requesttime.

    Filebeat pipeline update

    Please note that once you have updated the pipeline.yml file, you will need to make Filebeat push it to Elasticsearch. You have several options here:
    1. You can run the filebeat setup command, which will make sure everything is up-to-date in Elasticsearch.
    2. You can remove the ingest pipeline manually from Elasticsearch by running the DELETE _ingest/pipeline/filebeat-*-nginx* command. Then start Filebeat - it will set everything up during its start-up procedure.
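    For reference, the second option can be sketched with curl (this assumes Elasticsearch is reachable at localhost:9200 without authentication - adjust the host and credentials to your cluster):

    ```shell
    # Inspect the currently installed Filebeat NGINX ingest pipelines:
    curl -s 'http://localhost:9200/_ingest/pipeline/filebeat-*-nginx*'

    # Remove them so Filebeat re-creates them on start-up:
    curl -s -X DELETE 'http://localhost:9200/_ingest/pipeline/filebeat-*-nginx*'
    ```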


  2. Deployment Group provisioning in Azure DevOps (On-Premises)

    We are long-time users of Team Foundation Server (TFS). As you may know, it has recently been renamed to Azure DevOps. I absolutely love the new “DevOps” version (we are running v. 17.M153.5, by the way). But we faced two issues with it, so I’d like to document them here.

    1. Build Agent registration

    If you need to register a Build Agent, you have to include the Project Collection Name in the url. For example, previously it worked fine if you specified https://tfs.example.com/tfs/. But with Azure DevOps you have to use https://tfs.example.com/tfs/FooBar/ (FooBar is the collection name here). Otherwise you will get a Client authentication required error.

    2. Deployment Agent registration

    If you need to register an agent into a Deployment Group, you need to modify the PowerShell script a bit. In particular, you have to add --unattended --token {PAT_TOKEN_HERE}. So instead of the command below, which is part of the Registration script in the DevOps “Deployment Group” screen:
    .\config.cmd --deploymentpool --deploymentpoolname "DEV" --agent $env:COMPUTERNAME --runasservice --work '_work' --url 'https://tfs.example.com/tfs/'
    it should be something like this:
    .\config.cmd --deploymentpool --deploymentpoolname "DEV" --agent $env:COMPUTERNAME --runasservice --work '_work' --url 'https://tfs.example.com/tfs/' --unattended --token {PAT_TOKEN_HERE}
    Otherwise you will be asked to provide the url to DevOps again, and then get a Not Found error if you try to include the Collection Name in the url. As I understand it, the second issue has the same root cause as the first one - without the --unattended flag it was complaining about the https://tfs.example.com/tfs/ url. When I included the Collection Name in the url, it showed a “Not Found” error because the collection name appeared twice:
    https://tfs.example.com/tfs/{COLLECTION_NAME}/{COLLECTION_NAME}/_apis/connectionData?connectOptions=1&lastChangeId=-1&lastChangeId64=-1 failed.
    HTTP Status: NotFound
    A similar issue is discussed at https://github.com/microsoft/azure-pipelines-agent/issues/2565#issuecomment-555448786


  3. OpenSSL saves the day

    We needed to issue a tiny patch release for one of our legacy applications. To do so, we had to order a new code-signing certificate. I was a bit surprised when the build failed with an Invalid provider type specified error. For some reason it was failing to sign the ClickOnce manifest. Interestingly, signtool.exe was able to use that certificate just fine… I was lucky enough to find an amazing blog post at https://remyblok.tweakblogs.net/blog/11803/converting-certificate-to-use-csp-storage-provider-in-stead-of-cng-storage-provider I faced an issue though… I was not able to find pvk.exe, because Dr. Stephen N Henson’s website (at http://www.drh-consultancy.demon.co.uk/pvk.html) was down and I found no mirrors out there… So I used a slightly different approach to tackle it:

    1. I used OpenSSL to generate a PVK out of the PEM using the command below:
    openssl rsa -inform PEM -outform PVK -in demo.pem -out demo.pvk -passin pass:secret -passout pass:secret
    2. Then I used OpenSSL to generate a PFX out of the PVK & CER files (I had to export the public key as Base-64 encoded X.509 (.CER) first for the command below to work properly):
    openssl pkcs12 -export -out converted.pfx -inkey demo.pvk -in demo.cer -passin pass:secret -passout pass:secret
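    As a quick sanity check (assuming the converted.pfx file name and the pass-phrase secret from the commands above), you can confirm that the resulting PFX is usable before handing it to the build:

    ```shell
    # The PFX should open with the export pass-phrase and list
    # a private key plus a certificate:
    openssl pkcs12 -info -in converted.pfx -passin pass:secret -noout
    ```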


  4. Let's Encrypt or HTTPS for everyone

    It’s been a year since we started using free certificates on some of our production servers, so I decided to put together a tiny article highlighting how easy it is to secure connections to your server using Let’s Encrypt:

    Let’s Encrypt

    To enable HTTPS on your website, you need to get a certificate (a type of file) from a Certificate Authority (CA). Let’s Encrypt is a CA. In order to get a certificate for your website’s domain from Let’s Encrypt, you have to demonstrate control over the domain. With Let’s Encrypt, you do this using software that uses the ACME protocol, which typically runs on your web host. More details at https://letsencrypt.org/getting-started/

    ACME Client for Windows - win-acme

    To enable HTTPS on an IIS website, all it takes is the three steps below:
    1. Find out Site ID in IIS (Open IIS Manager and click on the “Sites” folder)
    2. Download a Simple ACME Client for Windows
    3. Run ACME Client (letsencrypt.exe) passing Site ID and Email for notifications
    For example, if your Site ID is 1 and the email for notifications is john.doe@example.com, the command will look like this:
    letsencrypt.exe --plugin iissite --siteid 1 --emailaddress john.doe@example.com --accepttos --usedefaulttaskuser
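    For step 1, the Site ID can also be read from the command line instead of IIS Manager (this assumes the standard IIS install path; run it from an elevated prompt):

    ```shell
    %windir%\system32\inetsrv\appcmd list sites
    # Each output line includes the ID, e.g.:
    #   SITE "Default Web Site" (id:1,bindings:http/*:80:,state:Started)
    ```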


  5. Group Policies which could affect your Web Application

    We are working on a web application which heavily depends on the following browsers’ features:

    • Application Cache - allows a website to ask the browser to cache it, so that users can open the website offline.
    • IndexedDB - allows websites to store data in the browser cache, so that all the data they need is available offline.
    • Web Storage - allows websites to store settings in the browser cache.

    Group Policy

    It’s common for enterprises to adjust the default IE 11 settings using Group Policies. In such cases some of the functionality will not be available. For example, a website may fail to work offline if it’s unable to store data in the browser’s cache. We prepared a list of the settings which might impact websites utilizing the browser features above.

    Edge

    • Computer Configuration -> Administrative Template -> Windows Components -> Microsoft Edge
      • Allow clearing browsing data on exit. Not Configured by default. If Enabled, it could cause data loss, and users won’t be able to open the application offline.

    IE 11

    • Computer Configuration -> Administrative Template -> Windows Components -> Internet Explorer -> Internet Control Panel -> Advanced Page
      • Empty Temporary Internet Files folder when browser is closed. Disabled by default.
    • Computer Configuration -> Administrative Template -> Windows Components -> Internet Explorer -> Internet Control Panel -> General Page -> Browsing History
      • Allow websites to store application caches on client computers. Enabled by default.
      • Set application caches expiration time limit for individual domains. The default is 30 days.
      • Set maximum application cache resource list size. The default value is 1000.
      • Set maximum application cache individual resource size. The default value is 50 MB.
      • Set application cache storage limits for individual domains. The default is 50 MB.
      • Set maximum application caches storage limit for all domains. The default is 1 GB.
      • Set default storage limits for websites. Not Configured by default.
      • Allow websites to store indexed databases on client computers. Enabled by default. Required for the application to be available offline.
      • Set indexed database storage limits for individual domains. The default is 500 MB.
      • Set maximum indexed database storage limit for all domains. The default is 4 GB.


  6. git-crypt - transparent file encryption in git

    Here at Compellotech we have been using Octopus to automate all of our deployments for several years now. Recently we started to adopt the Infrastructure as Code (IaC) approach to simplify environment management. It allows us to spin up new environments right from the Octopus dashboard. We use Azure Key Vault to store secret data (such as SSL certificates). And I just came across an interesting alternative - git-crypt. It looks very convenient.

    git-crypt enables transparent encryption and decryption of files in a git repository. Files which you choose to protect are encrypted when committed, and decrypted when checked out.
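    Judging by its documentation, a typical setup looks roughly like this (a sketch, assuming git-crypt and GPG are installed; the secrets/ path and email address are placeholders):

    ```shell
    cd your-repo
    git-crypt init                    # generate a repository key

    # Tell git which files to encrypt (e.g. everything under secrets/):
    echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes
    git add .gitattributes

    # Grant a teammate access via their GPG key:
    git-crypt add-gpg-user user@example.com

    # On a fresh clone, decrypt the protected files:
    git-crypt unlock
    ```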


  7. SQL Server Managed Backup to Microsoft Azure

    Recently we migrated one of our projects to SQL Server 2016. As part of the migration we enabled TDE for some databases. The next step was to configure backups. On our old SQL Server 2008 we already used to back up to Azure - it’s very convenient! So we were happy to use the Managed Backup feature of SQL Server 2016. There is a really good step-by-step tutorial on how to set it up on MSDN.

    I just want to note that when you configure “instance level” backups, keep in mind that you will have to apply the same settings to existing databases manually. So it makes sense to first configure the “Instance Level” backup settings and then restore your databases. It might save you a bit of time.

    It was a breeze to configure Managed Backup - a very smooth experience. Highly recommend!
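    For a single database, the per-database part of that tutorial boils down to something like the command below (a sketch, not our exact setup - the server, database name, container URL and retention value are placeholders, and the Azure storage credential must already exist in SQL Server):

    ```shell
    # Enable Managed Backup to Azure for one database (SQL Server 2016):
    sqlcmd -S . -Q "EXEC msdb.managed_backup.sp_backup_config_basic
        @enable_backup  = 1,
        @database_name  = 'YourDatabase',
        @container_url  = 'https://youraccount.blob.core.windows.net/yourcontainer',
        @retention_days = 30;"
    ```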


  8. Rethink DB - The open-source database for the real-time web

    A couple of months ago I came across RethinkDB - the open-source database for the real-time web. I’m really interested in real-time web tools and technologies. Last year I played with Meteor, and I still think it’s a pretty nice framework, especially for simple projects. What I don’t like about Meteor is that you have to opt into all the decisions they made. For example, you have to use MongoDB (at least at the moment) and you can’t use npm packages (at least at the moment). As far as I know, the Meteor team is working to address these issues. As for Firebase, it’s great, but again you have to opt in, and there is a possibility that you’ll have to switch away from it at some point if your project no longer fits well.

    I’m looking for a stack which allows rapid development of real-time apps and at the same time leaves all the options open to me, so I can easily make whatever decisions fit best for a given project. That’s why RethinkDB looks so interesting. First of all, it’s a powerful, easy-to-use and easy-to-configure document database. You can configure sharding and replication in a few clicks. You can create a cluster very easily, again using the fancy web UI!

    What’s more, RethinkDB allows you to subscribe to change notifications. For example, a NodeJS application could subscribe to changes in a messages table in just a few lines and then push the changes to clients using socket.io. Another use case is to send data into Elasticsearch to enable full-text search. The great thing is that all aspects are under your control. You decide what exactly to send to Elasticsearch, so instead of sending the whole document you send just the fields you want to be searchable. In the same way, you decide what to send to clients, and you can easily customize that at any point. If you’d like to learn more about RethinkDB, there is a great RethinkDB Fundamentals course at Pluralsight. The RethinkDB team recently released Horizon - a realtime, open-source backend for JavaScript apps. As you might expect, it uses RethinkDB as a central component.


  9. Smart Screen & EV Code Signing

    Recently our QA team started to get “Windows protected your PC” messages from Windows SmartScreen. They saw that message each time they launched the app I’m working on at the moment. The warning message didn’t even display the Publisher correctly.

    We were able to make SmartScreen show the Publisher correctly by signing our application with a SHA-256 code-signing certificate (we used SHA-1 originally). As for the warning message itself, we had to buy an Extended Validation (EV) Code Signing Certificate to get rid of it. I want to note that you can keep using signtool.exe to sign binaries. But you can’t do that from automated build scripts, because you have to provide a password. That’s why we had to update our deployment strategy to make this work. We added a manual intervention step (we are using Octopus, by the way) which allows us to sign binaries (by running a script and providing the password). More details at stackoverflow. Also, mage.exe didn’t play well with the EV Code Signing certificate - it didn’t ask for the password, and as a result the manifest was not signed correctly. Just in case, here’s the warning message we were getting:

    Windows protected your PC
    Windows SmartScreen prevented an unrecognized app from starting. Running this app might put your PC at risk
    
    App: {OurAppName}.exe
    Publisher: Unknown
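    For reference, signing a binary with the SHA-256 certificate via signtool.exe looks something like this (a sketch; the timestamp URL, thumbprint and file name are placeholders):

    ```shell
    signtool.exe sign /fd sha256 /td sha256 ^
        /tr http://timestamp.example.com ^
        /sha1 YOUR_CERT_THUMBPRINT ^
        YourApp.exe
    ```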
    


  10. Front-End development

    I did some research lately on ECMAScript 6 and the ES6 Module Loader Polyfill, and I’m very excited! It’s pretty cool how easy it is to use the latest technologies (ES2015 or TypeScript) to develop browser applications and let the tools transform the latest & greatest into JavaScript understood by the browser. Things such as jspm… it’s all so exciting!