Async/await using axios

var t = new Date();
var timeStr = t.toUTCString();

function getCoffee() {
  return new Promise(resolve => {
    setTimeout(() => resolve('☕ mocha coffee'), 2000); // it takes 2 seconds to make coffee
  });
}
async function go() {
  try {
    // but first, coffee
    const coffee = await getCoffee();
    console.log(coffee); // ☕
    document.getElementById('asyncTest').innerHTML = coffee;
    // then we grab some data over an Ajax request
    const wes = await axios('https://api.github.com/users/wesbos');
    console.log(wes.data); // mediocre code
    // footerLinks is a URL assumed to be defined elsewhere on the page
    const footerData = await axios({ url: footerLinks, method: 'GET', responseType: 'json' });
    console.log(footerData.data); // mediocre code
    var totalString = ' ONE : ';
    // the original loop was truncated; a plausible reconstruction that
    // concatenates each item of the footer response
    for (var i = 0; i < footerData.data.length; i++) {
      var tagContent = footerData.data[i];
      totalString = totalString + tagContent;
    }
    document.getElementById('sectionNormal').innerHTML = totalString;
  } catch (e) {
    console.error(e);
  }
}
go();
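The two `axios` calls in `go()` run one after the other, even though the second does not depend on the first. A minimal sketch of the sequential-versus-parallel pattern, using a hypothetical `fakeRequest` stand-in (no DOM or axios involved) so it runs in plain Node:

```javascript
// fakeRequest simulates a slow network call that resolves after `ms` milliseconds
function fakeRequest(name, ms) {
  return new Promise(resolve => setTimeout(() => resolve(`${name} data`), ms));
}

async function goSequential() {
  const a = await fakeRequest('users', 50);  // waits 50 ms
  const b = await fakeRequest('footer', 50); // then waits another 50 ms
  return [a, b];                             // total ≈ 100 ms
}

async function goParallel() {
  // start both requests first, then await; total time ≈ the slower one
  const [a, b] = await Promise.all([
    fakeRequest('users', 50),
    fakeRequest('footer', 50),
  ]);
  return [a, b];
}

goParallel().then(results => console.log(results)); // ['users data', 'footer data']
```

The same `Promise.all` shape works with the real axios calls whenever the requests are independent of each other.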
<body>
  <h1>Bismillah ar Rehmaan nir Raheem</h1>
  <hr />
  <!-- target elements the script above writes into -->
  <div id="asyncTest"></div>
  <div id="sectionNormal"></div>
  <hr />
</body>

Google: taking search to the next level

Arguably the most innovative company in technology and software has recently taken some big, bold steps in the world of hardware. From the acquisition of HTC’s mobile unit to the launch of new hardware at this year’s event, Google seems to be taking on much more hardware development to complement its search business.

Google’s original mission statement was “to organize the world’s information and make it universally accessible and useful”. It has surely done that with the best search index (Google Search), the largest and most convenient video library (YouTube), the ultimate map and guide (Google Maps) and the most popular internet browser (Chrome). One big emerging threat to Google concerns its search business. What if we stop asking Google for stuff and take our search queries elsewhere? What if we go straight to our preferred e-commerce site to buy things, turn to Instagram to find and follow the people we like and share images, or go to Twitter for the latest updates? All of this would mean lower search volumes for Google. To stay relevant and an important part of our lives, Google wants to help us proactively, without us asking it directly, and to achieve this it is using machine learning and AI.

Machine learning and artificial intelligence (AI) are no longer buzzwords; they are here, and with advances in processing capability, internet bandwidth and storage capacity, they can now receive and process vast amounts of audio and visual input, not just text-based search queries. The real story of the Google event is best observed through the enhancement of the Google Pixel (the flagship mobile device with Google Lens) and Google Home (the voice assistant), and the introduction of Clips, a camera which reveals Google’s focus on AI. These devices offer a clue as to why Google got into the hardware business itself rather than relying on partners to showcase its software capabilities.

Google Pixel 2 with ‘Google Lens’: the flagship mobile device ships with the new ‘Oreo’ operating system and promises upgrades through the next three operating system versions (instead of the usual two upgrade cycles), waterproofing, stereo speakers, and a top-performing camera which can be argued to be the best among Android phones. Priced competitively against other high-end Android phones, it can be a serious contender to the iPhone. The camera features optical image stabilization and excellent low-light shots, and to take a picture you just hold and squeeze the phone. It also features Google Lens, which mixes image-recognition and augmented-reality technology to let you point your phone camera at different objects and get more information about them, keeping the search business running and building upon its visual/image search project.

Google Home is all about convenience: a device which sits silently in a corner (unless you want to blast music through its improved speakers), waiting for our instructions, queries and feedback. Although our smartphones have been well equipped to do this for years, the level of accuracy has gone to a different level because the device needs to be on all the time. There is an argument that many companies listened to and watched us secretly, which led to all software seeking our explicit approval before it could access our camera, contacts or microphone. With Google Home, we are explicitly allowing Google to be a family member and listen in. With hands-free convenience and low prices, its aim is to ensure that Google knows what’s going on in the world – even if we take our searches elsewhere.

Google Clips is a stand-alone camera with intelligent software which decides for itself which moments must be captured. No kidding: it doesn’t even need internet access to function, and it gets smarter over time in light of our feedback on the pictures it took. It is ground-breaking in the sense that the algorithm which decides what to capture and what to ignore is built into the tiny device itself, with no internet connection required. The wonderful thing about these devices will be their accuracy, something which builds upon the study of billions of search queries from all over the world, and specifically our own history and interactions with them. The technology powering these devices builds on learning from our smartphone interactions and years of studying what we ourselves posted on Facebook, Twitter and Instagram – profiles, likes, shares and so on.

Google is expanding its search business with these devices and will surely hope that other hardware vendors jump on the bandwagon to further accelerate innovation in these key areas. Apart from Apple and Facebook, the only other company which can capture and process such an enormous amount of data is Google, and with its latest offerings we are allowing it unrestricted access like never before.

SharePoint website: The pursuit of agile front-end development


SharePoint is an amazing tool: the sheer breadth of options, coupled with its familiarity alongside other popular products in the Office suite (MS Word, MS Excel, etc.), makes it a top choice for corporations looking for a content management system (CMS). Designing a website on SharePoint can be difficult, however, because of the great leaps made by the competition. WordPress, Drupal and others have taken the benchmark for CMS design to a whole different level, with beautiful ready-made themes and an even more inviting variety of off-the-shelf options.

In a world of responsive design, with its many moving parts requiring a close eye, things get interesting when you throw a tentative, inexperienced team and a large CMS like Microsoft SharePoint into the mix. The thought of such a challenge can make many vendors shy away (obviously, many vendors do take on this challenge, but some that I know of didn’t fancy it).

Getting such a project off the ground and completed on time can be tricky. Decisions on design are dictated by many organizational and often personal factors. Negotiating the back and forth and bringing the teams to a workable compromise requires lateral thinking, quick learning and hard work.

Some shortcomings of the existing mechanism

For a user interface (UI/UX) component, frequent changes meant frequent deployments, each of which caused minor website downtime. In an environment where every such iteration can result in an outage, which in turn can harm the corporate brand, we needed a better solution.

UI/UX development is complex. Considering that most of the design elements are primarily visual (image sliders, navigation, etc.) which display information (more later on forms and elements which collect and store information from users), and that they are updated frequently, we needed something truly agile: fast during development, accommodating of frequent changes, and updatable without any disruption – all within the SharePoint ecosystem.

Content editor to the rescue

The problem with coding such design elements was that they took longer to update, compile, deploy and then test, and the work needed a server-class machine and developers with enough knowledge. We always knew that we could manipulate elements via the Content Editor Web Part’s ‘edit HTML’ feature, but we hadn’t thought about using it for actual development.

We started putting the code (HTML5, CSS3, Bootstrap and JavaScript – the things which power the beautiful visual elements) into text files and plugging them into Content Editor Web Parts, getting instant results and quick feedback from the business users.

We could change the file (which the Content Editor Web Part references) as many times as we wanted without causing any disruption to normal business. These text files would contain HTML tags, script and style blocks, which made everything easy to manage in the development and user-testing environments during the seemingly never-ending ‘discovery and R&D’ phases.
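As an illustration (the file name and element names here are hypothetical, not from the original project), such a text file referenced by a Content Editor Web Part via its Content Link property might look like a single self-contained snippet with its own style and script blocks:

```html
<!-- promo-banner.txt: stored in a document library and referenced by a
     Content Editor Web Part's "Content Link" property. Editing and saving
     this file updates the page with no deployment and no downtime. -->
<style>
  .promo { border: 1px solid #ccc; padding: 8px; }
</style>

<div class="promo" id="promoMessage">Loading…</div>

<script>
  // plain JavaScript, runs when the page renders the web part
  document.getElementById('promoMessage').innerHTML = 'Welcome to the intranet!';
</script>
```

Because the web part holds only a link to the file, re-uploading or editing the file in place changes the rendered page immediately, which is what made the iteration loop with business users so fast.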

The more experienced users could then learn where to make changes in these ‘text files’ and run with it themselves. The outages that used to accompany each and every design change dropped drastically.

Published originally on LinkedIn on August 7, 2016