Serdar Yegulalp

Docker will include Kubernetes in the box

Docker announced today it will integrate an “unmodified” version of Google’s Kubernetes container-orchestration tool as a native part of Docker. Docker said the Kubernetes integration will be available as a beta release, but gave no release date.

This integration will be extended to all versions of Docker—the for-pay Enterprise Edition, and the desktop incarnations, Docker for Mac and Docker for Windows, which use the free Community Edition. Both enterprise and desktop versions will have Kubernetes support for all the operating systems they currently support.

Why Docker is adding Kubernetes

One reason Docker is including Kubernetes is to spare developers the effort of standing up a Kubernetes instance, whether for simple dev/test or for actual production use. Historically it's been a chore to get Kubernetes running, and so a slew of third-party tools and projects have emerged to simplify the process. Most of the time, it's easier to use a Kubernetes distribution, because the distribution's packaging deals with these problems at a high level.

Gluon brings AI developers self-tuning machine learning

What is Grafeas? Better auditing for containers

The software we run has never been more difficult to vouch for than it is today. It is scattered across local deployments and cloud services, built from open source components that aren't always a known quantity, and delivered on a fast-moving schedule, all of which makes it a challenge to guarantee safety or quality.

The end result is software that is hard to audit, reason about, secure, and manage. It is difficult not just to know what a VM or container was built with, but what has been added or removed or changed and by whom. Grafeas, originally devised by Google, is intended to make these questions easier to answer.
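A rough sketch of the model Grafeas uses to answer such questions: reusable "notes" (descriptions of facts, such as a known vulnerability) and "occurrences" (a note observed in a specific resource, such as a container image). The class and field names below are illustrative stand-ins, not the real Grafeas API.

```python
# Minimal in-memory sketch of Grafeas's two core concepts: a "note"
# (a reusable description of some fact, e.g. a known vulnerability)
# and an "occurrence" (that fact observed in a specific container image).
class MetadataStore:
    def __init__(self):
        self.notes = {}        # note_id -> note details
        self.occurrences = []  # observed instances of notes

    def create_note(self, note_id, kind, description):
        self.notes[note_id] = {"kind": kind, "description": description}

    def create_occurrence(self, note_id, resource):
        if note_id not in self.notes:
            raise KeyError(f"unknown note: {note_id}")
        self.occurrences.append({"note_id": note_id, "resource": resource})

    def occurrences_for(self, resource):
        """Answer the audit question: what do we know about this image?"""
        return [o for o in self.occurrences if o["resource"] == resource]

store = MetadataStore()
store.create_note("CVE-2017-0001", "VULNERABILITY", "example flaw in libfoo")
store.create_occurrence("CVE-2017-0001", "registry/app@sha256:abc123")
print(len(store.occurrences_for("registry/app@sha256:abc123")))  # 1
```

Because notes are shared and occurrences point at concrete resources, asking "what has been added or changed, and by whom" becomes a simple query per image.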

What’s new in Kubernetes 1.8: role-based access, for starters

The latest version of the open source container orchestration framework Kubernetes, Kubernetes 1.8, promotes some long-gestating, long-awaited features to beta or even full production release. And it adds more alpha and beta features as well.

The new additions and promotions:

  • Role-based security features.
  • Expanded auditing and logging functions.
  • New and improved ways to run both interactive and batch workloads.
  • Many new alpha-level features, designed to become full-blown additions over the next couple of releases.

Kubernetes 1.8’s new security features

Earlier versions of Kubernetes introduced role-based access control (RBAC) as a beta feature. RBAC lets an admin define access permissions to Kubernetes resources, such as pods or secrets, and then grant (“bind”) them to one or more users. Permissions can be for changing things (“create”, “update”, “patch”) or just obtaining information about them (“get”, “list”, “watch”). Roles can be applied on a single namespace or across an entire cluster, via two distinct APIs.
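A Role and RoleBinding of the kind described above can be sketched as plain manifests. Here they are built as Python dicts that could be serialized to YAML or JSON for `kubectl apply`; the names ("pod-reader", "jane") are hypothetical.

```python
# Illustrative Kubernetes RBAC manifests expressed as Python dicts.
import json

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",  # namespaced; a ClusterRole would span the whole cluster
    "metadata": {"namespace": "default", "name": "pod-reader"},
    "rules": [{
        "apiGroups": [""],                  # "" is the core API group
        "resources": ["pods"],
        "verbs": ["get", "list", "watch"],  # read-only permissions
    }],
}

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",  # grants ("binds") the role to subjects
    "metadata": {"namespace": "default", "name": "read-pods"},
    "subjects": [{"kind": "User", "name": "jane",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader",
                "apiGroup": "rbac.authorization.k8s.io"},
}

print(json.dumps(role, indent=2))
```

Swapping `Role`/`RoleBinding` for `ClusterRole`/`ClusterRoleBinding` is what moves the permissions from a single namespace to the entire cluster.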

What’s new in MySQL 8.0

The team behind MySQL, the popular open source database that's a standard element in many web application stacks, has unveiled the first release candidate for version 8.0.

Features to be rolled out in MySQL 8.0 include:

  • First-class support for Unicode 9.0 out of the box.
  • Window functions and recursive SQL syntax, for queries that previously weren’t possible or would have been difficult to write.
  • Expanded support for native JSON data and document-store functionality.
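Recursive SQL is the easiest of these to demonstrate. The `WITH RECURSIVE` form below is standard SQL of the kind MySQL 8.0 adds; it is run here against SQLite purely because that engine ships with Python.

```python
# A recursive common table expression: generate the sequence 1..5.
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 1                -- anchor member: the starting row
        UNION ALL
        SELECT n + 1 FROM seq   -- recursive member: builds on prior rows
        WHERE n < 5
    )
    SELECT n FROM seq
""").fetchall()
print([n for (n,) in rows])  # [1, 2, 3, 4, 5]
```

Before recursive CTEs, queries like walking a tree of parent-child rows required either vendor-specific tricks or multiple round trips from application code.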

With version 8.0, MySQL is jumping several versions in its numbering (from 5.7), due to 6.0 being nixed and 7.0 being reserved for the clustering version of MySQL.

Cython 0.27 speeds Python by moving away from oddball syntax

Cython, the toolkit that converts Python code to high-speed C code, has a new 0.27 release that can use Python's own native type-annotation syntax to produce faster compiled code.

Previously, Cython users could accelerate Python only by decorating the code with type annotations in a dialect peculiar to Cython. Python has its own optional syntax for variable type annotation, but Cython didn’t use it.

With Cython 0.27, Cython can now recognize PEP 526-style type declarations for native Python types, such as str or list. The same syntax can also be used to explicitly declare native C types, with declarations like var: cython.int = 32.
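The PEP 526 syntax itself is plain Python, so the same annotated code runs uncompiled; under Cython the annotations let the compiler emit typed C instead. A small example, written in ordinary Python with annotations of the kind Cython 0.27 can consume:

```python
# PEP 526 variable annotations. In plain Python they are inert metadata;
# compiled with Cython, `acc: int` lets the compiler use a C integer.
def total(limit: int) -> int:
    acc: int = 0   # annotated local with an initial value
    i: int         # bare annotation, no value yet
    for i in range(limit):
        acc += i
    return acc

print(total(10))               # 45
print(total.__annotations__)   # {'limit': <class 'int'>, 'return': <class 'int'>}
```

The practical win is that one source file can serve both as regular Python and as Cython input, instead of maintaining a separate Cython-dialect copy.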

ONNX makes machine learning models portable, shareable

Mesosphere DC/OS taps Kubernetes for container orchestration

Microsoft linker tool shrinks .Net applications

A long-requested and long-unfulfilled feature for .Net has finally been delivered by Microsoft and the Mono team: A linker that allows .Net applications to be stripped down to include only the parts of libraries that are actually used by the program at runtime.

The IL Linker project works by analyzing a .Net application and determining which libraries are never called by the application in question. “It is effectively an application-specific dead code analysis,” says Microsoft in its GitHub announcement for the project.

A long-term mission for IL Linker is to make it into “the primary linker for the .Net ecosystem.”
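Dead-code analysis of this kind boils down to a reachability walk over a call graph: start at the entry point, mark everything transitively called, and strip the rest. A toy sketch follows; the graph and method names are made up, and the real linker works on .Net IL metadata rather than a dict.

```python
# Keep only methods reachable from the entry point; everything else
# is dead code the linker can strip.
def reachable(call_graph, entry):
    seen, stack = set(), [entry]
    while stack:
        method = stack.pop()
        if method in seen:
            continue
        seen.add(method)
        stack.extend(call_graph.get(method, []))  # visit callees
    return seen

call_graph = {
    "Main": ["Parse", "Render"],
    "Parse": ["Tokenize"],
    "Render": [],
    "Tokenize": [],
    "LegacyExport": ["Render"],  # never called from Main, so stripped
}
kept = reachable(call_graph, "Main")
print(sorted(kept))            # ['Main', 'Parse', 'Render', 'Tokenize']
print("LegacyExport" in kept)  # False
```

Note that `LegacyExport` calls a kept method but is itself unreachable, which is exactly why the analysis must start from the application's entry point rather than from the libraries.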

3 projects lighting a fire under machine learning

Mention machine learning, and many common frameworks pop into mind, from “old” stalwarts like Scikit-learn to juggernauts like Google’s TensorFlow. But the field is large and diverse, and useful innovations are bubbling up across the landscape.

Recent releases from three open source projects continue the march toward making machine learning faster, more scalable, and easier to use. PyTorch and Apache MXNet bring GPU support to machine learning and deep learning in Python. 

Pivotal, VMware team up to deploy Kubernetes on vSphere

Pivotal and VMware have teamed up to deliver commercial-grade Kubernetes distributions on both VMware vSphere and Google Cloud Platform (GCP).

Pivotal Container Service (PKS), launching in Q4 2017, runs Kubernetes atop VMware’s infrastructure management tools—vSphere, vSAN, and NSX. It also taps a project from Cloud Foundry, Kubo, originally created by Pivotal and Google, to deploy and manage Kubernetes on VMware’s stack.

Microsoft’s Project Brainwave accelerates deep learning in Azure

Earlier this year, Google unveiled its Tensor Processing Unit, custom hardware for speeding up prediction-making with machine learning models.

Now Microsoft is trying something similar, with its Project Brainwave hardware, which supports many major deep learning systems in wide use. Project Brainwave covers many of the same goals as Google's TPU: speeding up how predictions are served from machine learning models (in Brainwave's case, models hosted in Azure, using custom hardware deployed at scale in Microsoft's cloud).

13 frameworks for mastering machine learning

Image by W.Rebel via Wikimedia

Over the past year, machine learning has gone mainstream with a bang. The “sudden” arrival of machine learning isn’t fueled by cheap cloud environments and ever more powerful GPU hardware alone. It is also due to an explosion of open source frameworks designed to abstract away the hardest parts of machine learning and make its techniques available to a broad class of developers.

Docker Enterprise now runs Windows and Linux in one cluster

With the newest Docker Enterprise Edition, you can now have Docker clusters composed of nodes running different operating systems.

Three of the key OSes supported by Docker—Windows, Linux, and IBM System Z—can run applications side by side in the same cluster, all orchestrated by a common mechanism.

Clustering apps across multiple OSes in Docker requires that you build per-OS images for each app. But those apps, when running on both Windows and Linux, can be linked to run in concert via Docker’s overlay networking.

Amazon joins Kubernetes-focused CNCF industry group

The Cloud Native Computing Foundation, created to promote and develop technologies like Kubernetes and core components of the container ecosystem spawned by Docker, welcomed Amazon Web Services into its fold this week.

Amazon comes on board as a top-level (“platinum”) member. According to Amazon’s Adrian Cockcroft, now a member of the CNCF’s governing board, containers are the big reason Amazon’s getting involved—at least, initially.

Amazon already has a major investment in container tech. Its ECS service provides managed containers that run via machine images deployed on clusters of EC2 instances. Its older Elastic Beanstalk service can deploy and manage Docker containers, although they’re scaled and managed via Amazon’s own internal stack, not the CNCF’s Kubernetes. And users can always manually deploy Docker Enterprise Edition, a container-centric Linux such as CoreOS, or a Kubernetes cluster on EC2.

IBM speeds deep learning by using multiple servers

For everyone frustrated by how long it takes to train deep learning models, IBM has some good news: It has unveiled a way to automatically split deep-learning training jobs across multiple physical servers — not just individual GPUs, but whole systems with their own separate sets of GPUs.

Now the bad news: It’s available only in IBM’s PowerAI 4.0 software package, which runs exclusively on IBM’s own OpenPower hardware systems.

Distributed Deep Learning (DDL) doesn't require developers to learn an entirely new deep learning framework. It repackages several common frameworks for machine learning: TensorFlow, Torch, Caffe, Chainer, and Theano. Deep learning projects that use those frameworks can then run in parallel across multiple hardware nodes.
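The general technique behind splitting a training job across nodes is synchronous data parallelism: each worker computes gradients on its own data shard, the gradients are averaged (an all-reduce step), and the shared weights are updated. The sketch below illustrates that idea in plain Python on a toy linear model; it is a conceptual stand-in, not IBM's DDL implementation.

```python
# Synchronous data-parallel training on a 1-D linear model y = w * x.
def worker_gradient(weights, shard):
    # Gradient of mean squared error for this worker's data shard.
    w = weights[0]
    g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return [g]

def train_step(weights, shards, lr=0.05):
    grads = [worker_gradient(weights, s) for s in shards]  # run in parallel
    avg = [sum(gs) / len(grads) for gs in zip(*grads)]     # "all-reduce"
    return [w - lr * g for w, g in zip(weights, avg)]      # shared update

# Data generated from y = 3x, split across two "nodes".
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
weights = [0.0]
for _ in range(200):
    weights = train_step(weights, shards)
print(round(weights[0], 2))  # 3.0
```

Because every worker applies the same averaged gradient, all nodes stay in lockstep, and the result matches what a single machine training on the full dataset would produce.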

Apache Spark 2.2 gets streaming, R language boosts

With version 2.2 of Apache Spark, a long-awaited feature for the multipurpose in-memory data processing framework is now available for production use.

Structured Streaming, as that feature is called, allows Spark to process streams of data in ways that are native to Spark’s batch-based data-handling metaphors. It’s part of Spark’s long-term push to become, if not all things to all people in data science, then at least the best thing for most of them.
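The batch metaphor at the heart of Structured Streaming can be sketched without Spark at all: treat the stream as a table that grows in micro-batches, and maintain a running aggregate that is updated as each batch arrives. This stand-in uses plain Python and a toy word count, not the Spark API.

```python
# A stream as a sequence of micro-batches with a running aggregate.
from collections import Counter

def process_stream(batches):
    running = Counter()        # state carried across micro-batches
    for batch in batches:      # each batch is a small "table" of rows
        running.update(word for row in batch for word in row.split())
        yield dict(running)    # the result table after this batch

batches = [["spark streams", "spark"], ["streams of data"]]
results = list(process_stream(batches))
print(results[-1]["spark"])    # 2
print(results[-1]["streams"])  # 2
```

The point of the metaphor is that the same aggregation logic works whether the input arrives all at once (a batch job) or a piece at a time (a stream), which is what lets Spark reuse its batch machinery for streaming.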
