
Conversation

@mikebrow
Member

Adds/Enables AppArmor

Requires the following commit (or a similar fix) before AppArmor-profile-enabled containers will be "allowed" to be created through the kubelet:

kubernetes/kubernetes@747933e

Signed-off-by: Mike Brown [email protected]

@mikebrow mikebrow changed the title Adds support for AppArmor <Do no merge> Adds support for AppArmor <Do not merge> Aug 23, 2017
@Random-Liu Random-Liu self-assigned this Aug 23, 2017
@Random-Liu
Member

I'll try to get the dependency merged as soon as possible.

@mikebrow
Member Author

Added apparmor dependency notes to the README, installed the dependency in Travis, and added build tags for our runc build in the e2e test.

@Random-Liu
Member

Will take a look as soon as kubernetes/kubernetes#51167 is done.

@mikebrow
Member Author

Noticed an issue with my rebase: new code was added to filter security options when securityContext.GetPrivileged() is true. The simple rebase/merge missed that, so I was adding the apparmor profile whether privileged or not. Fixed.

git fetch --all
git checkout ${RUNC_VERSION}
make
make BUILDTAGS='seccomp apparmor'
Member

We should make apparmor optional via an environment variable. Let's not build it by default, and only enable it in our Travis build.

I don't think people using selinux need apparmor built in. :)

Member Author

Yeah, I was going to add the selinux option as well.

We should probably discuss this.

Member Author

Per discussion, I'll add support for users overriding our default BUILDTAGS.
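A minimal sketch of what an overridable default could look like in the Makefile; the variable name and default tag set here are assumptions, not the final implementation:

# Hypothetical: default build tags, overridable by the caller,
# e.g. `make BUILDTAGS='seccomp apparmor selinux'`.
# `?=` only assigns when BUILDTAGS is not already set in the
# environment; a command-line assignment always wins.
BUILDTAGS ?= seccomp apparmor

binaries:
	go build -tags '$(BUILDTAGS)' ./cmd/cri-containerd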

// TODO(random-liu): [P2] Add seccomp.

// Set apparmor profile name, Note: trims default profile name prefix
appArmorProf := securityContext.GetApparmorProfile()
Member

https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto#L512-L517

There are 3 possibilities:

  1. runtime/default
  2. localhost/<profile_name>
  3. empty

We should also handle 1). I'm fine with making the default empty for now, but we should add a TODO to make the default apparmor profile configurable via a flag or config file. And if the profile is runtime/default, you may not want to write it into the spec.

Member Author

@mikebrow mikebrow Aug 24, 2017

Yeah, I was interested to see what docker/default and/or runtime/default mean in this context. Are we expected to define it, or kubelet/CRI? I wasn't sure just how much filtering we should do and how much to let pass through. But yes, at a minimum a TODO is needed here. I was exploring how best to handle such default requests in more depth on the seccomp work.

Member

For runtime/default, we should have a default profile. For now, it could be empty.
But a TODO is needed here to make the default profile configurable via a flag.

Member Author

Sounds good.

@Random-Liu
Member

@mikebrow Not all people want apparmor. We should also have a flag to disable it, like cri-o and docker.

@mikebrow
Member Author

Maybe... we should discuss. For example, if they didn't want apparmor, why was it used? Do we propose all CRI implementors act as filters of kubelet requests? Or are we a pass-through? Do we suggest customers run two instances of cri-containerd, one with apparmor enabled on one socket and another with it off on another socket?

Dunno; I'd at least like to consider the options before we go down the path of cri-o, which seems more suited to a standalone client product than to a container-enabling interface.

@mikebrow
Member Author

mikebrow commented Aug 24, 2017

Per discussion, we'll see if we can determine whether apparmor is supported by runc; if we can't, we'll add a TODO for either a cri-containerd switch, or for figuring out how to add detection support in containerd/runc. It's easy enough to tell whether it's supported by the platform.. not so easy to see whether it's supported in runc without some PR work.

Going with a TODO for now.
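For the platform check, a minimal sketch of the usual approach, reading the apparmor kernel module parameter (an illustration, not the code that landed; detecting whether runc itself was built with the apparmor tag is the part that still needs the TODO):

package main

import (
  "fmt"
  "os"
  "strings"
)

// hostSupportsAppArmor reports whether the kernel has AppArmor enabled,
// by reading the module parameter most runtimes check. It says nothing
// about whether the runc binary was built with the apparmor build tag.
func hostSupportsAppArmor() bool {
  data, err := os.ReadFile("/sys/module/apparmor/parameters/enabled")
  if err != nil {
    return false
  }
  return strings.HasPrefix(strings.TrimSpace(string(data)), "Y")
}

func main() {
  fmt.Println("apparmor enabled on host:", hostSupportsAppArmor())
}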

@Random-Liu
Member

@mikebrow
Member Author

Waiting for an update to apimachinery https://github.com/kubernetes/apimachinery/issues/22

@crosbymichael
Member

Hey @mikebrow

I just merged some changes in containerd to help support apparmor, including generating profiles based on the default we have been using.

containerd/containerd#1438
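Those changes surface as spec options in containerd's contrib/apparmor package; usage looks roughly like this (a sketch against a current containerd tree, so the exact package paths may differ from what this PR vendored):

import (
  "github.com/containerd/containerd/contrib/apparmor"
  "github.com/containerd/containerd/oci"
)

// appArmorOpts returns spec options for either containerd's generated
// default profile or an operator-supplied one already loaded on the host.
func appArmorOpts(useDefault bool, profile string) []oci.SpecOpts {
  if useDefault {
    // Generates and loads the default profile, then references it in the spec.
    return []oci.SpecOpts{apparmor.WithDefaultProfile("cri-containerd.apparmor.d")}
  }
  return []oci.SpecOpts{apparmor.WithProfile(profile)}
}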

@mikebrow
Member Author

Thx @crosbymichael I'll work that in.

@crosbymichael
Member

@mikebrow hope it helps

@Random-Liu
Member

Will take another look today.

@mikebrow
Member Author

rebased .. testing

@crosbymichael
Member

@mikebrow nice and simple

code LGTM

// TODO (mikebrow): delete created apparmor default profile
specOpts = append(specOpts, apparmor.WithDefaultProfile(AppArmorDefaultProfileName))
}
if appArmorProf != "" {
Member

This means that both WithDefaultProfile and WithProfile will be called if the profile is RuntimeDefault.

Probably use a switch to make it clear.

Member Author

yeah switch or else.. my bad
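The switch ends up roughly like this (a sketch assembled from the diff hunks later in this review, with the constants lowercased per the review comments and the prefix check per the suggestion below):

const (
  profileNamePrefix          = "localhost/"
  runtimeDefault             = "runtime/default"
  appArmorDefaultProfileName = "cri-containerd.apparmor.d"
)

func appArmorSpecOpts(appArmorProf string) ([]oci.SpecOpts, error) {
  var specOpts []oci.SpecOpts
  switch appArmorProf {
  case runtimeDefault:
    // TODO(mikebrow): delete created apparmor default profile
    specOpts = append(specOpts, apparmor.WithDefaultProfile(appArmorDefaultProfileName))
  case "":
    // TODO(mikebrow): behavior for an unspecified profile is not yet
    // defined; see kubernetes/kubernetes#51746
  default:
    // "localhost/<profile>": require the prefix, then trim it.
    if !strings.HasPrefix(appArmorProf, profileNamePrefix) {
      return nil, fmt.Errorf("invalid apparmor profile %q", appArmorProf)
    }
    specOpts = append(specOpts,
      apparmor.WithProfile(strings.TrimPrefix(appArmorProf, profileNamePrefix)))
  }
  return specOpts, nil
}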

"github.com/kubernetes-incubator/cri-containerd/pkg/util"
)

// ProfileNamePrefix is the prefix for loading profiles on a localhost. Eg. AppArmor localhost/profileName.
Member

Please add comments for these constants, and move this comment to be in front of ProfileNamePrefix.

Member

And please make them private; I don't think we want to expose them. :)

ProfileNamePrefix = "localhost/"
RuntimeDefault = "runtime/default"
AppArmorDefaultProfileName = "cri-containerd.apparmor.d"
AppArmorEnabled = true // TODO (mikebrow): make these apparmor defaults configurable
Member

Let's always apply the profile if kubelet gives us a non-empty one, which is what we discussed, right?

Member Author

@mikebrow mikebrow Aug 31, 2017

And I think we also wanted a switch to override.. but yeah, the default is on and we don't have a way to turn it off yet, so: if a profile is provided and it's non-empty, we apply it, until we implement an override switch...

// Trim default profile name prefix
specOpts = append(specOpts, apparmor.WithProfile(strings.TrimPrefix(appArmorProf, ProfileNamePrefix)))
}
}
Member

The behavior when profile is not specified is not clearly defined. I filed a Kubernetes issue for this: kubernetes/kubernetes#51746

I'm fine with not setting a default profile for now, until it's clearly defined. Add a TODO for it.

Member Author

kk

)

// ProfileNamePrefix is the prefix for loading profiles on a localhost. Eg. AppArmor localhost/profileName.
const (
Member

We should get these constants from kubernetes/kubernetes#51747.
Add TODO for it.

Member Author

the todo is 4 lines down...

Member

I mean we should get the first 2 constants from the kubernetes package, which we haven't done yet.

default:
// Trim default profile name prefix
-  specOpts = append(specOpts, apparmor.WithProfile(strings.TrimPrefix(appArmorProf, ProfileNamePrefix)))
+  specOpts = append(specOpts, apparmor.WithProfile(strings.TrimPrefix(appArmorProf, profileNamePrefix)))
Member

if !strings.HasPrefix(appArmorProf, profileNamePrefix) {
  return nil, fmt.Errorf("invalid apparmor profile %q", appArmorProf)
}
specOpts = append(specOpts, ...)

Member Author

Seems overly restrictive but ok..

if appArmorProf != "" {
specOpts = append(specOpts, apparmor.WithDefaultProfile(appArmorDefaultProfileName))
case "":
// TODO (mikebrow): clarify what we should do in the case where no apparmor profile was specified see kubernetes/kubernetes#51746
Member

This line is way too long. :P

Member Author

what's the proper length 80 chars? :-)

ProfileNamePrefix = "localhost/"
RuntimeDefault = "runtime/default"
AppArmorDefaultProfileName = "cri-containerd.apparmor.d"
AppArmorEnabled = true // TODO (mikebrow): make these apparmor defaults configurable
Member

Please add TODO:

TODO(mikebrow): Get the following 2 constant strings from CRI kubernetes/kubernetes#51747.

@Random-Liu
Member

Random-Liu commented Sep 1, 2017

LGTM with final nits. After the comments are addressed and the node e2e apparmor test runs, I think we are good to go.

@mikebrow mikebrow changed the title Adds support for AppArmor <Do not merge> Adds support for AppArmor Sep 1, 2017
@mikebrow
Member Author

mikebrow commented Sep 1, 2017

Comments addressed, and e2e node passes after running sudo swapoff -va:

mike@mike-VirtualBox:~/go/src/k8s.io/kubernetes$ sudo make test-e2e-node FOCUS=AppArmor RUNTIME=remote CONTAINER_RUNTIME_ENDPOINT=/var/run/cri-containerd.sock
[sudo] password for mike: 
+++ [0901 06:38:48] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0901 06:38:48] Generating bindata:
    test/e2e/generated/gobindata_util.go
~/go/src/k8s.io/kubernetes ~/go/src/k8s.io/kubernetes/test/e2e/generated
~/go/src/k8s.io/kubernetes/test/e2e/generated
+++ [0901 06:38:49] Building go targets for linux/amd64:
    vendor/github.com/onsi/ginkgo/ginkgo
Creating artifacts directory at /tmp/_artifacts/170901T063907
Test artifacts will be written to /tmp/_artifacts/170901T063907
Updating sudo credentials
+++ [0901 06:39:10] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0901 06:39:10] Generating bindata:
    test/e2e/generated/gobindata_util.go
~/go/src/k8s.io/kubernetes ~/go/src/k8s.io/kubernetes/test/e2e/generated
~/go/src/k8s.io/kubernetes/test/e2e/generated
+++ [0901 06:39:11] Building go targets for linux/amd64:
    cmd/kubelet
    test/e2e_node/e2e_node.test
    vendor/github.com/onsi/ginkgo/ginkgo
Running Suite: E2eNode Suite
============================
Random Seed: 1504265967 - Will randomize all specs
Will run 248 specs

Running in parallel across 8 nodes

2017/09/01 06:39:28 proto: duplicate proto type registered: google.protobuf.Any
2017/09/01 06:39:28 proto: duplicate proto type registered: google.protobuf.Duration
2017/09/01 06:39:28 proto: duplicate proto type registered: google.protobuf.Timestamp
I0901 06:39:28.618051     668 validators.go:44] Validating os...
OS: Linux
I0901 06:39:28.619845     668 validators.go:44] Validating kernel...
I0901 06:39:28.620971     668 kernel_validator.go:77] Validating kernel version
KERNEL_VERSION: 4.4.0-93-generic
I0901 06:39:28.621072     668 kernel_validator.go:92] Validating kernel config
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: enabled (as module)
CONFIG_BLK_DEV_DM: enabled
I0901 06:39:28.628032     668 validators.go:44] Validating cgroups...
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
I0901 06:39:28.628092     668 validators.go:44] Validating package...
PASS
I0901 06:39:28.629720     593 e2e_node_suite_test.go:146] Pre-pulling images so that they are cached for the tests.
I0901 06:39:28.629734     593 remote_image.go:40] Connecting to image service /var/run/cri-containerd.sock
W0901 06:39:28.629747     593 util_unix.go:75] Using "/var/run/cri-containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/cri-containerd.sock".
I0901 06:39:28.630054     593 image_list.go:137] Pre-pulling images with CRI [gcr.io/google-containers/stress:v1 gcr.io/google_containers/busybox:1.24 gcr.io/google_containers/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff gcr.io/google_containers/e2e-net-amd64:1.0 gcr.io/google_containers/eptest:0.1 gcr.io/google_containers/hostexec:1.2 gcr.io/google_containers/liveness:e2e gcr.io/google_containers/mounttest-user:0.5 gcr.io/google_containers/mounttest:0.8 gcr.io/google_containers/netexec:1.4 gcr.io/google_containers/netexec:1.5 gcr.io/google_containers/netexec:1.7 gcr.io/google_containers/nginx-slim:0.7 gcr.io/google_containers/node-problem-detector:v0.4.1 gcr.io/google_containers/nonewprivs:1.2 gcr.io/google_containers/pause-amd64:3.0 gcr.io/google_containers/serve_hostname:v1.4 gcr.io/google_containers/test-webserver:e2e gcr.io/google_containers/volume-gluster:0.2 gcr.io/google_containers/volume-nfs:0.8 google/cadvisor:latest]
I0901 06:40:05.493790     593 kubelet.go:96] Starting kubelet
I0901 06:40:05.493842     593 feature_gate.go:150] feature gates: map[LocalStorageCapacityIsolation:true]
I0901 06:40:05.494161     593 server.go:147] Starting server "kubelet" with command "/usr/bin/systemd-run --unit=kubelet-901832266.service --slice=runtime.slice --remain-after-exit /home/mike/go/src/k8s.io/kubernetes/_output/local/go/bin/kubelet --kubelet-cgroups=/kubelet.slice --cgroup-root=/ --kubeconfig /home/mike/go/src/k8s.io/kubernetes/_output/local/go/bin/kubeconfig --address 0.0.0.0 --port 10250 --read-only-port 10255 --root-dir /var/lib/kubelet --volume-stats-agg-period 10s --allow-privileged true --serialize-image-pulls false --pod-manifest-path /home/mike/go/src/k8s.io/kubernetes/_output/local/go/bin/pod-manifest041889886 --file-check-frequency 10s --pod-cidr 10.100.0.0/24 --eviction-pressure-transition-period 30s --feature-gates  --eviction-hard memory.available<250Mi,nodefs.available<10%%,nodefs.inodesFree<5%% --eviction-minimum-reclaim nodefs.available=5%%,nodefs.inodesFree=5%% --v 4 --logtostderr --network-plugin=kubenet --cni-bin-dir /home/mike/go/src/k8s.io/kubernetes/_output/local/go/bin/cni/bin --cni-conf-dir /home/mike/go/src/k8s.io/kubernetes/_output/local/go/bin/cni/net.d --hostname-override mike-VirtualBox --container-runtime-endpoint=/var/run/cri-containerd.sock --container-runtime=remote --network-plugin= --cni-bin-dir="
I0901 06:40:05.494215     593 server.go:102] Running readiness check for service "kubelet"
I0901 06:40:05.494266     593 server.go:175] Output file for server "kubelet": /tmp/_artifacts/170901T063907/kubelet.log
I0901 06:40:05.495434     593 server.go:217] Running health check for service "kubelet"
I0901 06:40:05.495451     593 server.go:102] Running readiness check for service "kubelet"
I0901 06:40:06.497485     593 server.go:147] Starting server "services" with command "/home/mike/go/src/k8s.io/kubernetes/_output/local/go/bin/e2e_node.test --run-services-mode --test.timeout=24h --ginkgo.seed=1504265967 --ginkgo.focus=AppArmor --ginkgo.skip=\\[Flaky\\]|\\[Slow\\]|\\[Serial\\] --ginkgo.parallel.node=1 --ginkgo.parallel.total=8 --ginkgo.parallel.streamhost=http://127.0.0.1:40101 --ginkgo.parallel.synchost=http://127.0.0.1:40101 --ginkgo.slowSpecThreshold=5.00000 --container-runtime=remote --container-runtime-endpoint=/var/run/cri-containerd.sock --image-service-endpoint= --alsologtostderr --v 4 --report-dir=/tmp/_artifacts/170901T063907 --node-name mike-VirtualBox --kubelet-flags=--container-runtime-endpoint=/var/run/cri-containerd.sock --kubelet-flags=--container-runtime=remote --kubelet-flags=--network-plugin= --cni-bin-dir="
I0901 06:40:06.497515     593 server.go:102] Running readiness check for service "services"
I0901 06:40:06.497552     593 server.go:175] Output file for server "services": /tmp/_artifacts/170901T063907/services.log
I0901 06:40:06.498053     593 server.go:228] Initial health check passed for service "kubelet"
I0901 06:40:06.498408     593 server.go:206] Waiting for server "services" start command to complete
I0901 06:40:10.504528     593 e2e_node_suite_test.go:161] Node services started.  Running tests...
I0901 06:40:10.504547     593 e2e_node_suite_test.go:166] Wait for the node to be ready

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] GKE system requirements [Conformance] [Feature:GKEEnv]
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/gke_environment_test.go:303
Sep  1 06:40:12.722: INFO: Skipped because system spec name "" is not in [gke]


S [SKIPPING] in Spec Setup (BeforeEach) [0.000 seconds]
[k8s.io] GKE system requirements [Conformance] [Feature:GKEEnv]
/home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:672
  The docker daemon should support AppArmor and seccomp [BeforeEach]
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/gke_environment_test.go:338

  Sep  1 06:40:12.722: Skipped because system spec name "" is not in [gke]

  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:305
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] AppArmor [Feature:AppArmor]
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:48
STEP: Loading AppArmor profiles for testing
I0901 06:40:12.972396     622 apparmor_test.go:140] Loaded profiles: []
[BeforeEach] when running with AppArmor
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
STEP: Building a namespace api object
Sep  1 06:40:12.980: INFO: Skipping waiting for service account
[It] should enforce a permissive profile
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:76
[AfterEach] when running with AppArmor
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Sep  1 06:40:31.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-apparmor-test-s6llg" for this suite.
Sep  1 06:40:45.037: INFO: namespace: e2e-tests-apparmor-test-s6llg, resource: bindings, ignored listing per whitelist
Sep  1 06:40:45.042: INFO: namespace e2e-tests-apparmor-test-s6llg deletion completed in 14.040430439s


• [SLOW TEST:32.315 seconds]
[k8s.io] AppArmor [Feature:AppArmor]
/home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:672
  when running with AppArmor
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:77
    should enforce a permissive profile
    /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:76
------------------------------
[BeforeEach] [k8s.io] AppArmor [Feature:AppArmor]
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:48
STEP: Loading AppArmor profiles for testing
I0901 06:40:12.976237     623 apparmor_test.go:140] Loaded profiles: []
[BeforeEach] when running with AppArmor
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
STEP: Building a namespace api object
Sep  1 06:40:12.992: INFO: Skipping waiting for service account
[It] should enforce a profile blocking writes
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:66
[AfterEach] when running with AppArmor
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Sep  1 06:40:31.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-apparmor-test-bzbwt" for this suite.
Sep  1 06:40:45.042: INFO: namespace: e2e-tests-apparmor-test-bzbwt, resource: bindings, ignored listing per whitelist
Sep  1 06:40:45.064: INFO: namespace e2e-tests-apparmor-test-bzbwt deletion completed in 14.040943157s


• [SLOW TEST:32.405 seconds]
[k8s.io] AppArmor [Feature:AppArmor]
/home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:672
  when running with AppArmor
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:77
    should enforce a profile blocking writes
    /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:66
------------------------------
[BeforeEach] [k8s.io] AppArmor [Feature:AppArmor]
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:48
STEP: Loading AppArmor profiles for testing
I0901 06:40:12.964233     613 apparmor_test.go:140] Loaded profiles: []
[BeforeEach] when running with AppArmor
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
STEP: Building a namespace api object
Sep  1 06:40:12.967: INFO: Skipping waiting for service account
[It] should reject an unloaded profile
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:55
[AfterEach] when running with AppArmor
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Sep  1 06:40:27.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-apparmor-test-bgwzp" for this suite.
Sep  1 06:40:55.068: INFO: namespace: e2e-tests-apparmor-test-bgwzp, resource: bindings, ignored listing per whitelist
Sep  1 06:40:55.082: INFO: namespace e2e-tests-apparmor-test-bgwzp deletion completed in 28.041675066s


• [SLOW TEST:42.460 seconds]
[k8s.io] AppArmor [Feature:AppArmor]
/home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:672
  when running with AppArmor
  /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:77
    should reject an unloaded profile
    /home/mike/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:55
------------------------------
I0901 06:40:55.116551     593 e2e_node_suite_test.go:182] Stopping node services...
I0901 06:40:55.116593     593 server.go:303] Kill server "services"
I0901 06:40:55.116604     593 server.go:340] Killing process 748 (services) with -TERM
E0901 06:40:55.149333     593 services.go:99] Failed to stop services: error stopping "services": waitid: no child processes
I0901 06:40:55.149495     593 server.go:303] Kill server "kubelet"
I0901 06:40:55.223225     593 services.go:156] Fetching log files...
I0901 06:40:55.223337     593 services.go:165] Get log file "kern.log" with journalctl command [-k].
I0901 06:40:55.230584     593 services.go:165] Get log file "docker.log" with journalctl command [-u docker].
I0901 06:40:55.232791     593 services.go:165] Get log file "cloud-init.log" with journalctl command [-u cloud*].
E0901 06:40:55.234496     593 services.go:168] failed to get "cloud-init.log" from journald: Failed to add filter for units: No data available
, exit status 1
I0901 06:40:55.234521     593 services.go:165] Get log file "kubelet.log" with journalctl command [-u kubelet-901832266.service].
I0901 06:40:55.325454     593 e2e_node_suite_test.go:187] Tests Finished


Ran 3 of 248 Specs in 87.066 seconds
SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 245 Skipped 

Ginkgo ran 1 suite in 1m27.791098582s
Test Suite Passed

@mikebrow
Member Author

mikebrow commented Sep 1, 2017

Kube staging has been pushed, so I went ahead and moved from STTS branches back to the original sources.

@Random-Liu
Member

@mikebrow LGTM. Could you clean up your commits a bit? And then good to go.

@Random-Liu Random-Liu added the lgtm label Sep 1, 2017
@mikebrow
Member Author

mikebrow commented Sep 1, 2017

ok commits are squashed..

@Random-Liu
Member

Will merge after test passes.

@Random-Liu
Member

Just about to merge, and found a conflict... Please rebase.

@Random-Liu
Member

This will be the next one to merge after test passes.

@Random-Liu Random-Liu merged commit f7fd736 into containerd:master Sep 1, 2017
lanchongyizu pushed a commit to lanchongyizu/cri-containerd that referenced this pull request Sep 3, 2017
@Random-Liu Random-Liu mentioned this pull request Sep 6, 2017