Creating a bootable VMware ESXi USB drive in Windows


Virtualization is a big player in IT these days, regardless of the sector you’re in. Most businesses can benefit from consolidating servers—and to a greater degree, converging server, storage, and network infrastructures for centralized management and scalability.

With regard to consolidating servers by virtualizing them, the industry standard is VMware, with its extensive software and support offerings for businesses of all sizes. It even has a free offering—ESXi—which is its base hypervisor that can be run on any bare-metal supported server hardware to get IT pros familiar with the product and help organizations on their way to migrating their servers to virtual machines.

While many newer servers have added modern touches to facilitate VMware deployments, such as internal SD card readers for loading the hypervisor onto the SD to maximize all available resources, these servers have also done away with legacy items, like optical drives—and that makes loading VMware onto the servers a bit difficult initially.

But fret not, as USB flash drives (UFDs) have proven to be more than capable at replacing optical media for booting operating systems. And given their flexible read/write nature, even updating installers is a breeze using the very same UFD.

Read on and we’ll cover the steps necessary to create a bootable UFD, with VMware ESXi on it, from your Windows workstation. However, before jumping into this, there are a few requirements:

  • Windows workstation (running XP or later)
  • Rufus
  • VMware ESXi ISO
  • USB flash drive (4GB minimum)
  • Internet access (optional, but recommended)

SEE: VMware vSphere: The smart person’s guide (TechRepublic)

Creating the USB installer

    Start by inserting your UFD into the Windows computer and launching Rufus. Verify that under Device, the UFD is listed (Figure A).

    Figure A

      In the next section, Partition Scheme, select MBR Partition Scheme For BIOS Or UEFI from the dropdown menu (Figure B).

      Figure B

        Skip down to the CD icon and click on it to select the previously downloaded VMware ESXi ISO image (Figure C).

        Figure C

      Finally, click on the Start button to begin the process of formatting and partitioning the UFD and extracting the contents of the ISO to your USB drive. Please note that any data on the drive will be erased (Figure D).

      Figure D

      The transfer process will vary depending on the specifications of your workstation, but typically it should be completed within several minutes. During this process, you may be prompted to update the menu.c32 file, as the one used by the ISO image may be older than the one used by Rufus on the flash drive. If this occurs, click Yes to automatically download the newest compatible version from the internet. Once the process is complete, your USB-based VMware ESXi installation media will be created and ready to boot the hypervisor setup on your server.

      Note: If the USB drive will not boot on your server, make sure USB boot is enabled in the BIOS or UEFI settings. In addition, before downloading the ISO from VMware, it is highly recommended that you consult VMware’s Compatibility Guide on its website, which lets you verify that your hardware is supported for use with its products. If it isn’t, a previous version of ESXi may be a better fit, or supplemental drivers may need to be added before your specific server will boot properly.
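
If you want to be extra careful before writing the image, you can also verify the downloaded ISO against the checksum VMware publishes alongside the download. Below is a minimal Python sketch of that check; the file path and expected hash are placeholders you would substitute with your own values.

```python
# Minimal sketch: confirm the downloaded ESXi ISO matches the published
# checksum before writing it to the UFD. Path and hash are placeholders.
import hashlib

ISO_PATH = r"C:\Downloads\VMware-ESXi-installer.iso"            # hypothetical path
EXPECTED_SHA256 = "<paste the SHA-256 from the download page>"  # placeholder

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash the file in chunks so a large ISO doesn't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(ISO_PATH)
print("Checksum OK" if actual.lower() == EXPECTED_SHA256.lower()
      else f"Checksum mismatch: {actual}")
```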

      Edge computing: The smart person’s guide

      Many companies want Internet of Things (IoT) devices to monitor and report on events at remote sites, and this data processing must be done remotely. The term for this remote data collection and analysis is edge computing.

      Edge computing technology is applied to smartphones, tablets, sensor-generated input, robotics, automated machines on manufacturing floors, and distributed analytics servers that are used for “on the spot” computing and analytics.

      Read this smart person’s guide to learn more about edge computing. We’ll update this resource periodically with the latest information about edge computing.

      SEE: All of TechRepublic’s smart person’s guides

      Executive summary

      • What is edge computing? Edge computing refers to generating, collecting, and analyzing data at the site where data generation occurs, and not necessarily at a centralized computing environment such as a data center. It uses digital devices—often placed at different locations—to transmit the data in real time or later to a central data repository.
      • Why does edge computing matter? It is predicted that by 2020 more than 5.6 billion smart sensors and other IoT devices will be in use around the world, and these devices will generate at least 507.5 zettabytes of data. Edge computing will help organizations handle this volume of data.
      • Who does edge computing affect? IoT and edge computing are used in a broad cross-section of industries, which include hospitals, retailers, and logistics providers. Within these organizations, executives, business leaders, and production managers are some of the people who will rely on and benefit from edge computing.
      • When is edge computing happening? Many companies have already deployed edge computing as part of their IoT strategy. As the numbers of IoT implementations increase, edge computing will likely become more prevalent.
      • How can our company start using edge computing? Companies can install edge computing solutions in-house or subscribe to a cloud provider’s edge computing service.

      SEE: Ultimate Data & Analytics Bundle of Online Courses (TechRepublic Academy)




      What is edge computing?

      Edge computing is computing resources (e.g., servers, storage, software, and network connections) that are deployed at the edges of the enterprise. For most organizations, this requires a decentralization of computing resources so some of these resources are moved away from central data centers and directly into remote facilities such as offices, retail outlets, clinics, and factories.

      Some IT professionals might argue that edge computing is not that different from traditional distributed computing, which saw computing power move out of the data center and into business departments and offices several decades ago. Edge computing is different because of the way it is tethered to IoT data collected from remote sensors, smartphones, tablets, and machines. This data must be analyzed and reported on in real time so its outcomes are immediately actionable for personnel at the site.

      IT teams in every industry use edge computing to monitor network security and to report on malware and/or viruses. When a breach is detected at the edge, the menaces can be quarantined, thereby preventing a compromise of the entire enterprise network.

      From a business standpoint, here is how various industries use edge computing.

      • Corporate facilities managers use IoT and edge computing to monitor the environmental settings and the security of their buildings.
      • Semiconductor and electronics manufacturers use IoT and edge computing to monitor chip quality throughout the manufacturing process.
      • Grocery chains monitor their cold chains to ensure that perishable food requiring specific humidity and temperature levels during storage and transport is maintained at those levels.
      • Mining companies deploy edge computing with IoT sensors on trucks to track the vehicles as they enter remote areas. These companies also use edge computing to monitor equipment on the trucks in an attempt to prevent goods in transit from being stolen for resale in the black market.

      IoT and edge computing are used in a broad cross-section of industries, which include the following.

      • Logistics providers use a combination of IoT and edge computing in their warehouses and distribution centers to track the movement of goods through the warehouses and in the warehouse yards.
      • Hospitals use edge computing as a localized information collection and reporting platform in their operating rooms.
      • Retailers use edge computing to collect point of sales data at each of their stores, and then they transmit this data later to their central sales and accounting systems.
      • Edge computing that collects data generated at a manufacturing facility could monitor the functioning of equipment on the floor and issue alerts to personnel if a particular piece of equipment shows signs that it is failing.
      • Edge computing, combined with IoT and standard information systems, can inform production supervisors whether all operations are on schedule for the day. Later, all of this data that is being processed and used at the edge can be batched and sent into a central data repository at the corporate data center where it can be used for trend and performance analysis by other business managers and key executives.
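
The manufacturing examples above describe a common edge pattern: act on readings locally and in real time, then batch the raw data for later upload to the central repository. Here is a minimal Python sketch of that pattern; the sensor names, threshold, and upload step are invented for illustration and not taken from any particular product.

```python
# Minimal sketch of the edge pattern described above: alert on the spot,
# queue everything, and periodically ship the batch to a central repository.
# The threshold, sensor IDs, and the "upload" are placeholders.
import json
import time
from collections import deque

ALERT_TEMP_C = 75.0   # hypothetical failure threshold for floor equipment
batch = deque()

def handle_reading(sensor_id: str, temp_c: float) -> None:
    """Raise an immediate local alert; queue the reading for headquarters."""
    if temp_c >= ALERT_TEMP_C:
        print(f"ALERT: {sensor_id} at {temp_c:.1f} C - notify floor personnel")
    batch.append({"sensor": sensor_id, "temp_c": temp_c, "ts": time.time()})

def flush_batch() -> None:
    """Stand-in for the periodic bulk transfer to the corporate data center."""
    payload = json.dumps(list(batch))
    print(f"Uploading {len(batch)} readings ({len(payload)} bytes) to central repo")
    batch.clear()

# A few readings arriving at the edge:
for sensor, temp in [("press-01", 68.2), ("press-02", 79.4), ("press-01", 69.0)]:
    handle_reading(sensor, temp)
flush_batch()
```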

      For IT, edge computing is not a slam-dunk proposition—it presents significant challenges, which include:

      • The sensors and other mobile devices deployed at remote sites for edge computing must be properly operated and maintained;
      • Security must be in place to ensure that these remote devices are not compromised or tampered with;
      • Training is often required for IT and for company operators in the business so they know how to work with edge computing and IoT devices;
      • The business processes using IoT and edge computing must be revised frequently; and
      • Since the devices on the edge of the enterprise will be emitting data that is important for decision makers throughout the company, IT must devise a way to find sufficient bandwidth to send all of this data (usually over the internet) to the necessary points in the organization.

      Additional resources

      Why does edge computing matter?

      It is projected that by 2020 there will be 5,635 million smart sensors and other IoT devices employed around the world. These smart IoT devices will generate over 507.5 zettabytes (1 zettabyte = 1 trillion gigabytes) of data.

      By 2023, the global IoT market is expected to top $724.2 billion. The accumulation of IoT data, and the need to process it at local collection points, is what’s driving edge computing.

      Businesses will want to use this data—the catch is the data that IoT generates will come from sensors, smartphones, machines, and other smart devices that are located at enterprise edge points that are far from corporate headquarters. This IoT data can’t just be sent into a central processor in the corporate data center as it is generated, because the volume of data that would have to move from all of these edge locations into HQs would overwhelm the bandwidth and service levels that are likely to be available over public internet or even private networks.

      SEE: Internet of Things Policy (Tech Pro Research)

      As organizations move their IT to the “edges” of the organization where the IoT devices are collecting data, they are also implementing local edge computing that can process this data on the spot without having to transport it to the corporate data center. This IoT data is used for operational analytics at remote facilities; the data enables local line managers and technicians to immediately act on the information they are getting.

      Companies need to find ways to utilize IoT that pay off strategically and operationally. The greatest promise that IoT brings is in the operational area, where machine automation and auto alerts can foretell issues with networks, equipment, and infrastructure before they develop into full-blown disasters.

      For instance, a tram operator in a large urban area could ascertain when a section of track will begin to fail and dispatch a maintenance crew to replace that section before it becomes problematic. Then the tram operator could notify customers via their mobile devices about the situation and suggest alternate routes. Great customer service helps boost revenues.

      Additional resources

      Who does edge computing affect?

      Edge computing as a way of managing incoming data from IoT will affect companies of all sizes in virtually every public and private industry sector.

      Projects can range from something as modest as placing automated security monitoring on your entryways to monitoring vehicle fleets in motion, controlling robotics during telesurgery procedures, or automating factories and collecting data on the quality of goods being manufactured as they pass through various manufacturing operations half a globe away.

      One driving factor is a focus on IoT by commercial software vendors, which are increasingly providing modules and capabilities in their software that exploit IoT data. Subscribing to these new capabilities doesn’t necessarily mean that a company has to invest in major hardware, software, and networks, since so many of these resources are now available in the cloud and can be scalable from a price point perspective.

      Companies that do not take advantage of the insights and actionability that IoT and edge computing can offer will likely be at a competitive disadvantage in the not so distant future. The tram use case cited earlier in this article is an excellent example: What if you operated a tram system, and you did not have IoT insights into the condition of your tracks or the ability to send messages to customers that advise them of alternate routes? What if your competitor has these capabilities? You would be at a competitive disadvantage.

      Additional resources

      When is edge computing happening?

      Companies in virtually every public and private industry sector are already using IoT technology with an edge computing approach. A 2016 Tech Pro Research survey revealed that over half of all companies surveyed (ranging from large enterprises to very small businesses) had either implemented or were planning to implement IoT in the next 12 months; many of these organizations will use edge computing with their IoT strategies. (Tech Pro Research is a sister site of TechRepublic.) Regardless of where a company is with its IoT implementation, edge computing should be on every enterprise IT strategic roadmap.

      Major IT vendors are the primary enablers of edge computing because they will be pushing their corporate customers to adopt it. These vendors are purveying edge solutions that encompass servers, storage, networking, bandwidth, and IoT devices.

      Affordable cloud-based solutions for edge computing will enable companies of all sizes to move computers and storage to the edges of the enterprise. Cloud-based edge computing vendors also know the best ways to deploy edge computing with IoT to optimize results for the business.

      Additional resources

      How can our company start using edge computing?

      Businesses can implement edge computing either on premise as a physical distribution of servers and data collection devices, or through cloud-based solutions. Intel, IBM, Nokia, Motorola, General Electric, Cisco, Microsoft, and many other tech vendors offer solutions that can fit on-premise and cloud-based scenarios. There are also vendors that specialize in the edge computing needs of particular industry verticals and IT applications, such as edge network security, logistics tracking and monitoring, and manufacturing automation. These vendors offer hardware, software, and networks, as well as consulting advice on how to manage and execute an edge computing strategy.

      SEE: Free ebook—Digital transformation: A CXO’s guide (TechRepublic)

      To enable a smooth flow of IoT generated information throughout the enterprise, IT needs to devise a communications architecture that can facilitate the real-time capture and actionability of IoT information at the edges of the enterprise, as well as figure out how to transfer this information from enterprise edges to central computing banks in the corporate data center. Companies want as many people as possible throughout the organization to get the information so they can act on it in strategically and operationally meaningful ways.

      Additional resources

      Worried about ransomware? Here are 3 things IT leaders need to know before the next big outbreak

      Ransomware: It’s a fast-growing form of malware that has the potential to disrupt business in a huge way. Some, like the recent WannaCry outbreak, even have the potential to spread from computer to computer.

      Ransomware’s continued spread may be due to how simple it is to use. After all, why do all the work of harvesting information from a victim when you can just wait for them to send you money?

      In short, it’s dangerous, it’s spreading, and it could disrupt your entire network. If you’re a tech decision maker, you need to educate yourself on this growing threat. Eric Ogren, cybersecurity analyst at 451 Research, spoke with TechRepublic about some important things you may not know about ransomware.

      1. Ransomware operations are sophisticated, but not cutting edge

      It’s not just a couple of people sitting in a cramped room trying to steal your money—ransomware operations can get pretty sophisticated. “Some ransomware organizations even have help lines if you aren’t sure how to use Bitcoin,” Ogren said. “They’re surprisingly sophisticated.”

      SEE: Despite hype, ransomware accounted for only 1% of malicious programs in 2016, according to report (TechRepublic)

      Sophistication doesn’t mean ransomware companies are on the cutting edge of advanced technology, though. When it comes down to it, ransomware campaigns, however advanced their encryption methods and worm capabilities, are using old, well-known tricks to proliferate.

      Phishing attacks are the most common methods of spreading ransomware, Ogren said, “because it’s so much easier. Why waste time trying to write scripts to break through security when you can just rely on a person to make a mistake?”

      Once a piece of ransomware is on a system, it isn’t doing anything unknown either: Most exploit well-known, and likely already patched, flaws. Recent ransomware outbreaks have been perfect examples of this—those who fell prey to WannaCry and Petya were all lacking an essential security patch that Microsoft had released in March 2017.

      2. Most people don’t pay

      Of those hit by ransomware, according to 451 Research (note: study is behind paywall), 81% don’t pay. Instead, they simply reimage the affected machines, and the majority of those restore from backups that minimize data loss.

      Excepting ransomware that worms its way into the BIOS, wiping and reimaging is certainly the best solution for getting rid of ransomware. That’s cold comfort for those who don’t back up their data regularly, though, so make sure you have a solid backup plan in place.

      “It’s not a question of will you get hit by ransomware or other malware,” Ogren said. “It’s a question of when.” He added that businesses should never pay ransoms—all that does is encourage ransomers to keep trying.

      If you’ve heard that companies are stockpiling Bitcoins in anticipation of paying ransoms, you may have considered such a drastic move, but don’t—funnel those funds toward establishing a good backup protocol instead.

      SEE: Gallery: 10 free backup applications to help you prevent disaster (TechRepublic)

      3. There is no silver bullet against ransomware

      “You can do everything right,” Ogren said, “and still end up getting an infection.” Ransomware exploits people in order to spread, and therein lies the problem: Computers can be patched, but a person just needs to see a phishing message that seems like it’s real.

      Minimizing your chances of getting ransomware is the most you can do. That includes:

      • Keeping systems up to date
      • Training users to spot suspicious emails
      • Making sure your IT department or MSP is ready for an attack
      • Backing up computers and essential data (see the sketch after this list)
      • Disabling hyperlinks in email so users can’t open phishing messages
      • Blocking attachments from unverified sources

      Search for every possible malware ingress point and shut them all down. It won’t guarantee your safety, but nothing really will. All you can do is minimize your risks.

      Top three takeaways for TechRepublic readers:

      1. Some 81% of ransomware victims just wipe and reimage their machines. Make this your go-to plan when prevention fails, and make it practical by ensuring everything is backed up regularly.
      2. Ransomware operations may be sophisticated, but the software isn’t. Most (including WannaCry and Petya) exploit security holes that are known, and in many cases ones that have already been patched. Those who keep up with updates are far less likely to fall victim.
      3. Ransomware prevention is never a guarantee—there’s always the potential for an infection. All you can do is stay on top of good security practices.

      South Australian police look to VR to improve firearms training

      The South Australian Police Academy has deployed a new virtual reality (VR) training simulator to assist officers in training with firearms in real-world scenarios. According to an official statement, it allows for training without ammunition, thereby increasing safety, and reducing cost and environmental impact.

      The VirTra training system offers a 300-degree view of simulated scenarios, utilizing multiple screens, 3D audio, and special effects, the statement said. In addition to testing firearms use, the simulator is also meant to test “factors such as communication skills, de-escalation and appropriate responses,” South Australia Police deputy commissioner Linda Williams said in the statement.

      The new system is the first tech update that the training facility has received in roughly 10 years. Williams said in the statement that it could lead to more interactive scenarios and enhance existing response training efforts.

      SEE: Virtual reality for business: The smart person’s guide

      The training facility includes a live fire range as well. And since it is indoors, officers can still train regardless of potential inclement weather.

      The VirTra simulator, which set the police force back $480,000, has been in use since June 2017. Now, however, it is being made available for other groups, such as the South Australian Police’s Special Tasks and Rescue (STAR) group, to train on as well. STAR is typically tasked with handling situations deemed more high-risk, Williams said in the statement.

      The system has also been used by other jurisdictions, and in conjunction with other systems, the statement noted, enabling groups to share training programs and insights. “This is a welcomed upgrade of the system already used to train police in the use of a full range of tactical options including firearms, electronic control devices and capsicum spray,” Police Minister Peter Malinauskas said in the statement.

      Specialized training is one of the major use cases for VR in the enterprise. Organizations in education, the skilled trades, and even NASA use VR to train their employees and experts for scenarios that could be difficult to replicate through other mediums.

      The US military also has a new VR simulator called PARASIM, used to train paratroopers on particular combat jumps. Military and police implementations have the added value of training users for potentially life-threatening situations in a virtual space, keeping them safer and more prepared.

      The 3 big takeaways for TechRepublic readers

      1. A new VirTra VR simulator is in use by the South Australian Police Academy to assist police officers in firearms training.
      2. The simulator doesn’t use live ammunition, saving the department money, lowering its environmental impact, and increasing safety.
      3. Training has quickly emerged as the most useful enterprise implementation for VR and related technologies, especially in dangerous professions like the military or police.

      These 10 US states have the highest rate of malware infections in the country


      Does where you live have anything to do with how likely you are to experience a malware attack? A new look at more than 1.5 million malware infections in the US compared rates of infection for the first six months of 2017—and found significant differences across the 50 states.

      Computer users in New Hampshire are most at risk—its rate of malware infections was 201% higher than the national average—according to the report, released by Enigma Software Group (ESG), which, it should be noted, makes anti-malware programs.

      It’s not yet clear what kind of insight the geography sheds on malware rates, as the highest and lowest states, in terms of malware infection, include a mix of East Coast and West Coast, higher- and lower-income areas, and densely and sparsely populated regions.

      Here are the 10 US states with the highest malware rates, and the % higher than the national average:

      1. New Hampshire (201%)
      2. Colorado (143%)
      3. Virginia (80%)
      4. New Jersey (64%)
      5. Oregon (25%)
      6. New York (24%)
      7. Montana (24%)
      8. Missouri (23%)
      9. Arizona (18%)
      10. Maine (17%)

      The report looks at several forms of malware, including adware, rogue anti-spyware, ransomware—which, although highly damaging, only accounted for 1% of malicious programs in 2016, according to a recent report—and nuisance-ware, which is not as severe as other malware but can impede workplace productivity by installing unwanted programs and slowing down computer speeds.

      Some general trends emerge from the report. The good news? Overall, malware infections have dropped every month since the beginning of the year, with infections in June 2017 31% lower than in January. ESG posits that the drop could be due to more secure updates of Windows being downloaded.

      ESG offers the following tips to keep computers protected from all kinds of malware, including backing up data (physically as well as through the cloud) on a regular basis, investing in a malware removal program that can automatically scan files, updating software regularly, and being cautious about opening links, especially via social media channels.
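
To make the “automatically scan files” tip concrete, here is a toy Python sketch that flags files whose SHA-256 hashes match a list of known-bad samples. This is only an illustration of the idea; real anti-malware products do far more, and the hash list here is a placeholder.

```python
# Toy illustration of automated file scanning: flag files whose SHA-256
# matches a (placeholder) known-bad list. Not a substitute for real AV.
import hashlib
from pathlib import Path

KNOWN_BAD = {
    "<sha-256 of a known-bad sample>",   # placeholder; real scanners use threat feeds
}

def scan(folder: str) -> None:
    for path in Path(folder).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD:
                print(f"Flagged: {path}")

# scan(r"C:\Users\Public\Downloads")   # hypothetical folder to check
```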

      SEE: Information Security Certification Training Bundle (TechRepublic Academy)

      “Regardless of where you live, it’s always important to stay vigilant for infections all the time,” said ESG spokesperson Ryan Gerding in a press release. And as TechRepublic’s Alison DeNisco has reported, small and medium businesses are particularly vulnerable to attacks, and lost more than $1 billion in 2016 due to ransomware alone.

      The 3 big takeaways for TechRepublic readers

      1. A new report by Enigma Software Group looked at more than 1.5 million malware infections in the US and compared rates of infection for the first six months of 2017, finding significant differences across the 50 states.
      2. Computer users in New Hampshire are most at risk—its rate of malware infections was 201% higher than the national average.
      3. Overall malware infections have dropped every month since the beginning of the year, with infections in June 2017 31% lower than in January.

      Google’s updated feed taps machine learning to show new topics before you search for them

      Google has advanced its machine learning algorithm to provide a more personalized news feed to Google Search mobile app users, according to a Wednesday blog post from vice president of engineering Shashi Thakur.

      “While we each keep up to date on the things that matter to us in different ways—social media, news apps, talking to friends—it’s hard to find one place to stay in the know about exactly what matters to you,” Thakur wrote. “Today that’s changing.”

      Google first unveiled its app’s news feed in December, to help users stay on top of news stories that interest them and organize their personal information and schedule. This new update aims to make it “easier than ever to discover, explore and stay connected to what matters to you—even when you don’t have a query in mind,” Thakur wrote.

      In short, the Google Search app will suggest things for you to check out, before you even start typing.

      The search giant’s machine learning algorithm learns from a user’s search history as well as information from Google Maps data, Gmail, and YouTube searches to anticipate topics that are most important to the user, and will display stories or videos by topics on cards. For example, a user might see cards with information such as sports highlights, top news, videos, new music, or articles to read. The feed will also factor in stories that are trending in your local area, and around the world. “The more you use Google, the better your feed will be,” Thakur wrote.
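
Google has not published the details of its model, but the basic idea of ranking topics from interaction history can be shown with a toy sketch: count a user’s interactions per topic, weight recent activity more heavily, and surface the top-scoring topics as cards. Everything below is invented for illustration and is not Google’s actual algorithm.

```python
# Toy illustration only (not Google's actual algorithm): score topics from
# interaction history with recency weighting, then pick the top "cards".
from collections import defaultdict

# (topic, days_ago) pairs standing in for searches, YouTube views, etc.
history = [
    ("photography", 1), ("photography", 3), ("fitness", 10),
    ("sports highlights", 2), ("photography", 15), ("fitness", 40),
]

def score(events, half_life_days=14.0):
    """Exponentially decay older interactions so current interests rank higher."""
    scores = defaultdict(float)
    for topic, days_ago in events:
        scores[topic] += 0.5 ** (days_ago / half_life_days)
    return scores

top = sorted(score(history).items(), key=lambda kv: kv[1], reverse=True)
for topic, s in top[:3]:
    print(f"card: {topic:20s} score={s:.2f}")
```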

      SEE: Google Analytics Mastery Course (TechRepublic Academy)

      The feed changes based on the user’s preferences. For example, Thakur wrote, if your hobby is photography, and you have a casual interest in fitness, your feed will reflect that. Users can also manually follow topics of interest right from Search by clicking a new Follow button next to search results such as movies, sports teams, musicians, or celebrities. It’s easy to unfollow a topic as well, Thakur wrote.

      Google’s feed also aims to offer broader context to the stories that appear by offering multiple viewpoints from a variety of sources, as well as other related articles, according to the post. “And when available, you’ll be able to fact check and see other relevant information to help get a more holistic understanding about the topics in your feed,” Thakur wrote. It’s also easy to search deeper into the topics of interest found in your feed, with one tap opening a Google search.

      These updates can make it easier to stay informed about the topics you care about, Thakur wrote. However, the feed could also become a distraction from the actual search you go to the Google app to complete.

      The feed is now available in the Google app for Android and iOS in the US, and will roll out internationally in the coming weeks. Businesses should explore how to better engage with their local communities to become a part of the feed, as it could be another method to potentially gain traffic via Google.


      The 3 big takeaways for TechRepublic readers

      1. Google’s updated news feed taps advanced machine learning capabilities to offer users a more personalized experience, according to a Wednesday blog post.

      2. The feed determines a user’s interests based on their search history and selected preferences, and shows them a variety of stories and videos on those topics.

      3. The updates are available in the Google app for Android and iOS in the US, and will roll out internationally soon.

      Could artificial intelligence disrupt the photography world?


      Scroll through some of the recent stories found on TechRepublic and you’ll see the topic of artificial intelligence (AI) mentioned on several occasions. AI isn’t something widely seen in action today, but the prospect of it becoming more common is definitely on the lips (and in the text editors) of technologists. Can AI disrupt the world of photography? Will it eventually replace human input when it comes to processing photos? Anything is possible, but I truly doubt it.

      What’s happening with AI and photography?

      In a recent blog post, a team at Google shared how its deep learning technology has been able to produce “professional quality” photo editing for a batch of landscape photos. In blind testing, pro photographers rated up to 40% of the images edited by AI as semi-pro or pro level quality. Quite frankly, some of the images published were quite nice, but is this enough to disrupt the world of photography? I don’t think so. Disrupt the world of photography editing? Well it could be useful, but not disruptive. Allow me to explain.

      SEE: The practical applications of AI: 6 videos (TechRepublic)

      AI versus the photographer on set

      Let’s think of a scenario that a photographer may face. First there’s a scheduled photo shoot with a client. In general, the client will have ideas on what they’re looking for in the session, and the photographer works closely with the client to meet those needs. We’ll just throw headshot sessions out the window and look more at product photography or photography based on a scene in our example. Now close your eyes, be the client, and think of an ad showing a boardroom setting. In any scenario, it’s up to the client and photographer to determine the mood and message they want presented in that boardroom photo shoot.

      Is the message “Board meetings are serious and powerful”? Or is the message “Come together and collaborate”? Both messages can be answered from the same scene by making a few nuanced changes with lighting, the models’ posture, facial expressions, and gestures, or even the props used within the scene. The client may not understand those concepts, but the photographer will. In this scenario, I can’t say AI will aid in getting the client’s message across. Right now, the AI used by Google isn’t based on compositing or replacing props in a scene. A boardroom with a few bottles of water or cups of coffee does not give the same vibe as a boardroom with an open box of doughnuts and crumpled cans of energy drinks. AI isn’t ready to replace the analytical skills a photographer brings to the set of a photo shoot.

      AI versus the photographer in the editing studio

      In the editing process, the photographer and AI share the same data. If a client were to upload an image into an AI system, they could easily input specified parameters to assist in the editing process. Keywords, and maybe even a brief description of what the client is looking for, are handy data. The AI could analyze the keywords against the uploaded image, proceed with editing to fit the client’s needs, and display it within minutes or even SECONDS as a preview. The client could then approve the image and download it for use.
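
To picture what such a keyword-driven pipeline might look like in its simplest form, here is a hedged Python sketch using the Pillow imaging library. The keyword-to-adjustment table, factors, and file name are all invented for illustration; a real AI editor would go far beyond these global tweaks.

```python
# Hypothetical sketch of a keyword-driven auto-edit preview (requires Pillow).
# The keywords, factors, and file name are placeholders for illustration.
from PIL import Image, ImageEnhance

KEYWORD_ADJUSTMENTS = {                 # hypothetical keyword -> (enhancer, factor)
    "bright": (ImageEnhance.Brightness, 1.2),
    "moody":  (ImageEnhance.Brightness, 0.85),
    "punchy": (ImageEnhance.Contrast,   1.3),
    "vivid":  (ImageEnhance.Color,      1.4),
}

def auto_edit(path, keywords):
    """Apply the global adjustments implied by the client's keywords."""
    img = Image.open(path)
    for kw in keywords:
        if kw in KEYWORD_ADJUSTMENTS:
            enhancer_cls, factor = KEYWORD_ADJUSTMENTS[kw]
            img = enhancer_cls(img).enhance(factor)
    return img

# preview = auto_edit("boardroom.jpg", ["bright", "punchy"])  # hypothetical file
# preview.show()
```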

      But what if the client doesn’t approve?

      Speaking from experience, I’ve edited photos for clients who didn’t always agree with my post processing—especially when dealing with humans in the images. “Can you make my neck look slimmer?” “Can you remove that small mole that’s under my left eye?” Those are not outlandish requests and are pretty common because most people want aesthetically superior models in their photographs. On the other hand, some individuals have taken pride in or made a name for themselves around their imperfections. Think of the former NFL player, Michael Strahan. Strahan has a gap between his two front teeth. With the gazillions of dollars he’s earned as a professional football player, he could easily have gotten orthodontic care to correct the gap. He didn’t. How will AI photo editing handle such situations? Sure, the machine can learn to touch up skin blemishes or imperfections, but to what extent? Will the AI understand the context of the edit or the subject matter better than a human?

      SEE: Video: Why artificial intelligence will be humankind’s final invention (TechRepublic)

      When I hosted a Smartphone Photographers Community, we discussed how photos that tell a story are usually the photos that capture our emotions. It may not be the photo with the best exposure or color saturation, but when you see it, you stop to admire it. For example, one of the more iconic images of US history is the raising of the US flag at Iwo Jima. This image isn’t technically sound. The exposure isn’t quite right and the contrast could be increased. But at the end of the day, WHO CARES? It’s an awesome photo capturing an emotional moment. Who’s to say that running the image through post processing wouldn’t have ruined it?


      Conclusion

      I think it would be tough for AI to know when and where to draw the line when it comes to post processing photos. Some photos need human intervention in the editing process to understand the mood and message the photo is supposed to convey, not just the adjusting of exposure or white balance. If a photo is just a run-of-the-mill landscape photograph, there just may be a place for AI photo editing. But even with that said, I’d much rather lean on the professional skills of landscape photographers, such as Trey Ratcliff or Thomas Heaton, who have a way of tugging at your emotions with their photography.

      Enterprise spending on public cloud to hit $266B in 2021, driven by SaaS, cloud apps

      The United States is set to become the largest market for public cloud services, accounting for over 60% of global public cloud services spending, according to a new report from IDC. In addition, total worldwide spending is projected to reach $266 billion by 2021, with $163 billion accounted for by the US, the report said.
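
A quick sanity check of those figures, using only the numbers cited above, confirms the “over 60%” claim for the US share:

```python
# US share of projected 2021 worldwide public cloud spending (figures as cited above).
us_2021, world_2021 = 163e9, 266e9
print(f"US share: {us_2021 / world_2021:.0%}")   # about 61%
```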

      SEE: Learn Cloud Computing from Scratch (TechRepublic Academy)

      “Professional services, banking, and telecommunications are the three fastest growing industries worldwide over the forecast period,” Eileen Smith, program director of Customer Insights and Analysis, said in a press release.

      Software as a Service (SaaS), composed of system infrastructure software (SIS) and applications, will continue as the leading mode of cloud computing. SaaS encompasses nearly two thirds of all public cloud spending in 2017 and is projected to capture nearly 60% by 2021, the report stated.

      Over half of public cloud spending comes from big businesses with more than 1,000 employees, according to the report. Within such businesses, most of the public cloud services are channeled towards new projects in areas like sales and customer service, Smith added.

      The report highlighted that over 60% of cloud applications spending will be dedicated to customer relationship management (CRM) and enterprise resource management (ERM). These applications are predicted to dominate SaaS spending over the next five years, setting the tone for the future of public cloud spending.

      TechRepublic’s Hope Reese noted that with the increase in cloud spending comes an increase in concern about cloud security. In fact, Forrester predicts a spike in cloud security spending over the next five years. As Forrester analyst Jennifer Adams wrote in a blog post about cloud security spending, “monitoring data across multiple cloud platforms and products simultaneously is a challenge, as is cross-platform security.”

      IDC’s Worldwide Semiannual Public Cloud Services Spending Guide calculates public cloud purchases for 20 industries across 47 countries and five company sizes, the report said. The detailed spending guide was created to help IT professionals see, on an industry-specific level, where public cloud funds are being spent today and where they will be spent over the next five years, the press release stated.

      The 3 big takeaways for TechRepublic readers

      1. According to a recent IDC report, total worldwide spending on cloud services will reach $266 billion by 2021.
      2. Software as a Service (SaaS) is the dominant mode of cloud computing and will continue to grow, consuming nearly 60% of cloud spending by 2021.
      3. The increase in cloud spending could present more opportunities to cyber hackers to attack, which means organizations will increase spending in cybersecurity to keep public cloud computing safe.

      Massive Amazon S3 breaches highlight blind spots in enterprise race to the cloud

      A recent data breach at Dow Jones exposed data including names, addresses, and partial credit card numbers from millions of customers, according to a Monday report from UpGuard. The reason for the leak? Dow Jones simply chose the wrong permission settings for the Amazon Web Services (AWS) S3 data repository.

      By configuring the settings the way it did, Dow Jones essentially gave any AWS user access to the data. While this seems like an oversight that would be easily caught by an admin, common-sense mistakes are rampant among large companies racing to get their data to the cloud.
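
For readers wondering what catching this kind of misconfiguration might look like in practice, here is a hedged sketch using the AWS SDK for Python (boto3). It flags any bucket whose ACL grants access to the “AllUsers” or “AuthenticatedUsers” groups, the latter being the “any AWS user” setting at issue here. Credential setup and bucket-policy checks are left out for brevity.

```python
# Sketch of a simple S3 ACL audit with boto3: flag buckets whose ACLs grant
# access to everyone or to any authenticated AWS user.
import boto3

RISKY_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers": "the public",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers": "any AWS user",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        uri = grant.get("Grantee", {}).get("URI")
        if uri in RISKY_GRANTEES:
            print(f"{bucket['Name']}: {grant['Permission']} granted to {RISKY_GRANTEES[uri]}")
```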

      In July 2017, Verizon confirmed a leak of data from some 6 million customers, brought on by a poorly chosen security setting on an S3 repository. Additionally, a GOP voter records leak of the personal data of almost 200 million Americans, called the “largest ever” of its kind, was attributable to poor S3 security.

      SEE: Complete IT Cloud Security & Hacking Training (TechRepublic Academy)

      So, what gives? Is AWS to blame for all these breaches affecting S3? In short, the answer is no. In an effort to move quickly to the cloud, gaining the competitive advantages promised therein, many organizations overlook key steps in securing their cloud data.

      RSA senior director of advanced cyber defense Peter Tran said that cloud security is in a “delicate state of transition,” with a massive surge of cloud migrations happening in the past year. Additionally, the desire to move to the cloud as fast as possible has been driven by organizations looking to get away from aging legacy infrastructure and take advantage of cloud flexibility. And the sheer speed of the “cloud first” movement has led to security gaps, specifically regarding identity management and access controls, Tran said.

      “The ‘lumpiness’ in cloud security happens when business risks aren’t aligned to technology risks and there are blind spots in design, deployment, implementation, governance, policy and compliance….flying a plane with no windows or instruments….exposures and mistakes can happen,” Tran said.

      When it comes to data security in the enterprise, the margin for error is slim. But, it’s even smaller when it comes to cloud security, Tran noted. If you miss the mark even slightly, the results could be catastrophic.

      According to Rob Enns, vice president of engineering for Bracket Computing, the prevalence of the S3 breaches highlights the fact that organizations must own their cloud security—they cannot outsource it.

      “Enterprise security architectures must expand to include cloud services in addition to on-premise data centers,” Enns said. “To manage complexity in these new environments, consistency from on-premise to cloud (and across cloud service providers) and enabling IT to retain control of information security gives application architects and developers a base on which they can move fast while remaining compliant with the enterprise’s security requirements.”

      When considering a public cloud storage provider, Tran said, businesses should look at both the Service Level Objective (SLO) and Service Level Agreement (SLA) to determine what level of risk they’re willing to take on, as they address different issues. Sometimes, the risk is too much and needs to be left on the table.

      The 3 big takeaways for TechRepublic readers

      1. Poor cloud security practices have led to AWS S3 data leaks at Dow Jones, Verizon, and a GOP voter analytics firm, putting user data at risk.
      2. As companies race to the cloud, they are forgoing proper security practices and aren’t properly aligning the risks with the business needs.
      3. Companies need to own their cloud security, examine the SLA and SLO, and decide what they’re willing to take on in terms of issues and risk.

      IBM: 73% of CEOs predict AI, cognitive computing to play ‘key role’ in future of business

      According to new IBM research, released Thursday, more than 73% of CEOs predict that cognitive computing, or artificial intelligence (AI), technologies will play a “key role” in their business’s future. The press release announcing the findings also noted that more than 50% of these CEOs plan to adopt these technologies by the year 2019.

      In terms of where AI will prove most impactful, respondents listed information technology, sales, and information security as the top three priorities for the technology. The 6,000 CEOs surveyed also said they expect a 15% return on their AI investments.

      With regard to IT, cognitive solutions can improve software development and testing, lead to better efficiency and agility, and boost solution design, the release said. Sales professionals get access to better data for lead management and can be more efficient in their account management. Cognitive computing could also speed threat and fraud detection and free up employees, the release said.

      SEE: The Complete Machine Learning Bundle (TechRepublic Academy)

      IBM is known for its cognitive computing solution, Watson, which famously won the TV game show Jeopardy. While IBM has distanced itself from the term “AI” in its use of “cognitive computing,” the technologies accomplish many of the same goals.

      As part of the study, IBM also gave specific recommendations on how business could use AI or cognitive computing to power their digital transformation efforts. For starters, planning is essential. IBM recommends outlining an 18-to-24-month strategy for adopting these technologies.

      “Define enterprise or business unit reinvention case, KPIs, and targets. Apply a targeted operating model and governance that support this strategy,” the press release said.

      Next up is the ideation stage. Periodically, businesses should assess the market and how certain users are responding. Experiment with these technologies to develop best practices and common use cases, the report said. Be sure to tailor your standards to your own organization.

      After that, users should incubate and scale their ideas, as well as work on new prototypes to address specific business problems.

      “Design and execute pilots with agility and limited risk to existing customers and operations. When these concepts are incubated, commercialized, and scaled, use a lean governance model to periodically review progress and value. Monitor the business case value realization and make adjustments, as necessary,” the release said.

      The 3 big takeaways for TechRepublic readers

      1. A new IBM research report claims that 73% of CEOs predict cognitive computing will play a “key role” in the future of their business.
      2. The report also said that more than 50% of CEOs plan to adopt the technology by 2019, and they expect a 15% return on their investment.
      3. IBM recommends outlining a plan for adoption of these technologies, before ideating, incubating, and scaling your solutions.