Latest News

Wednesday, April 25, 2018

Software: Database development vs modern computing


The paramount concern for all software developers should be striving towards a well-structured, quality codebase; otherwise they risk ending up with inconsistent, nonsensical code and a constant stream of requests against the database.
Modern computing power allows many more calculations in a shorter space of time, which has the unfortunate side effect of masking inefficient data-access code that would otherwise stand out against well-written code.
The problem compounds with an ORM or another abstracted model, where the code you write gives no explicit hint that a method may select half the database, however little of it you actually need.
This can result in unintentionally inefficient code; but if the code runs, why concern yourself with the load you have just placed on the system?
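To make the over-fetching concrete, here is a minimal sketch using Python's built-in sqlite3 module. The `users` table and its column names are hypothetical, invented for illustration; the point is the difference between pulling every column (the default "load the whole entity" behaviour of many ORMs) and selecting only what the caller needs.

```python
import sqlite3

# In-memory database with a hypothetical 'users' table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, bio TEXT)"
)
conn.executemany(
    "INSERT INTO users (name, email, bio) VALUES (?, ?, ?)",
    [(f"user{i}", f"user{i}@example.com", "x" * 1000) for i in range(100)],
)

# Careless: fetches every column, including the large 'bio' blob,
# just to list names. This is what an abstracted model can hide.
names_wasteful = [row[1] for row in conn.execute("SELECT * FROM users")]

# Deliberate: select only the column the caller actually needs.
names_lean = [row[0] for row in conn.execute("SELECT name FROM users")]

# Both produce the same result, but the second moves far less data.
assert names_wasteful == names_lean
```

On a hundred rows the difference is invisible, which is exactly the article's point: on a growing dataset or a slow connection, the wasteful query's cost eventually surfaces.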
Efficient code, when it is well written, thought out, and refactored or re-engineered appropriately, is more maintainable, and its intended behaviour is easier to understand. It also puts less strain on the database; and the quicker an application can respond to the user's requests, the more trustworthy and useful it becomes.
This doesn’t mean go and denormalise all your databases; it means take time to think about how you implement the interaction with the database and strive to understand the implications of the code you write. Consider the delay it might cause in growing or already large datasets, slower connections, and the database developer who might be maintaining it in the future.
This also allows for better scalability, regardless of whether or not this was intended originally, bearing in mind that even a little-used application will have a growing database over time. This may sound like it will extend development time, but I firmly believe that the time is wisely spent due to the clarity of the final product.
The evidence of this is painfully felt when testing, debugging, maintaining and expanding code that has not been crafted but rather fired out: coding by coincidence.
If a development team or individual takes the time to consider the code they write and how it is constructed, it will directly benefit the individual, and the benefit compounds across the team and everyone who happens upon the code in the future.


Tuesday, April 24, 2018

Netflix and Google launch Kayenta open source canary tool



An open source tool for automated deployment monitoring has been launched by Netflix and Google to help other companies modernise their practices.
Kayenta is a 'canary analysis' tool which aims to detect problems before they become serious. Fun fact: coal miners would once take canaries in cages down into the pits because the birds are especially sensitive to dangerous gases; if a canary died, the miners knew to make a quick exit.
Netflix first began development on Kayenta for internal use but decided it wanted to release it to a wider audience. Much of the code was specific to Netflix, so the company enlisted the help of Google to rewrite parts of it and make it modular. The teams spent about a year undertaking this effort.
Greg Burrell, Senior Reliability Engineer at Netflix, says:
"Automated canary analysis is an essential part of the production deployment process at Netflix and we are excited to release Kayenta. Our partnership with Google on Kayenta has yielded a flexible architecture that helps perform automated canary analysis on a wide range of deployment scenarios such as application, configuration and data changes.
By the end of the year, we expect Kayenta to be making thousands of canary judgments per day. Spinnaker and Kayenta are fast, reliable, and easy-to-use tools that minimise deployment risk while allowing high velocity at scale."
The result is a flexible tool which is going to help businesses of all sizes improve their deployments. Big companies have the budgets and expertise to build a bespoke solution for their needs, but this still takes a lot of time.
Tom Feiner, Systems Operations Engineer at Waze, comments:
“Canary analysis along with Spinnaker deployment pipelines enables us to automatically identify bad deployments. With 1000+ pipelines running in production, any form of human intervention as a part of canary analysis can be a huge blocker to our continuous delivery efforts.
Automated canary deployment, as enabled by Kayenta, has allowed our team to increase development velocity by detecting anomalies faster. Additionally, being open source, standardizing on Kayenta helps reduce the risk of vendor lock-in.”
In today’s world, companies know they need to move fast. Startups typically perform better here because they are more nimble. Continuous software development practices break larger projects into smaller parts so directions can be changed more quickly if needed, but deployments can often be rushed and face problems.
Kayenta, like other canary analysis tools, runs checks to catch problems before an upgrade is fully deployed. The system is objective, avoiding the human error and potential bias involved in a manual canary test.
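The core idea of an automated canary judgment can be sketched in a few lines. Kayenta's real analysis applies proper statistical tests per metric across many metrics; the function below is a deliberately simplified illustration (the name `canary_judgment`, the tolerance value, and the sample figures are all invented for this sketch), showing only the shape of an objective baseline-versus-canary comparison.

```python
def canary_judgment(baseline, canary, tolerance=0.10):
    """Compare a canary's metric samples against a baseline.

    Returns 'pass' if the canary's mean stays within `tolerance`
    (relative) of the baseline's mean, else 'fail'. This is a
    hypothetical sketch of the idea, not Kayenta's actual algorithm.
    """
    base_mean = sum(baseline) / len(baseline)
    canary_mean = sum(canary) / len(canary)
    drift = abs(canary_mean - base_mean) / base_mean
    return "pass" if drift <= tolerance else "fail"

# Hypothetical error-rate samples (errors per minute) from the
# baseline fleet and from the small canary deployment.
assert canary_judgment([2.0, 2.1, 1.9], [2.0, 2.2, 2.0]) == "pass"
assert canary_judgment([2.0, 2.1, 1.9], [5.0, 6.0, 5.5]) == "fail"
```

Because the judgment is computed rather than eyeballed, the same criteria apply to every deployment, which is what makes this approach practical at the scale of thousands of pipelines.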

Agile initiatives expanding in the enterprise – but lots more work to be done



Agile is expanding within the enterprise – but there is plenty more that can be done to improve organisational initiatives.
That’s the key finding from enterprise software development firm CollabNet. In the company’s latest State of Agile report – the 12th iteration – which collected almost 1,500 responses from various industries in software development, 97% of respondents’ organisations practised agile development methods. Of that number, 52% said that more than half of the teams in their organisation were using agile practices.
Those who have taken the plunge most often cite improvements in their ability to manage changing priorities – named by 71% of those polled – followed by better project visibility (66%), greater alignment between business and IT (65%), and quicker delivery speed and time to market (62%). Greater team productivity and improved team morale were also frequently cited.
DevOps initiatives are also on the rise, with almost half (48%) saying they have an initiative currently underway, and a further 23% at the planning stage. The most popular measure of the success of DevOps initiatives was accelerated delivery speed, cited by 58% of those polled, ahead of improved quality (51%) and an increased flow of business value to users (44%).
Yet only 12% of those polled said their organisations had a high level of competency across the organisation, with an even smaller number (4%) saying agile practices were enabling greater adaptability to market conditions.
Plenty still needs to be done, therefore, to get organisations up to speed. “The need to manage the entire value stream, from strategic planning to customer delivery, and to have the proper feedback loops in place, will be critical going forward,” the report notes. “This will require the organisational culture, skills, and tooling to allow for managing and measuring the flow of business value across the entire value stream.
“To achieve this, enterprises will need to truly unify their agile portfolio planning, agile project management, and continuous delivery efforts.”

Source: developer-tech
