<!-- END MUNGE: EXAMPLE ../../examples/blog-logging/counter-pod.yaml -->
This pod specification has one container which runs a bash script when the container starts. The script simply writes out the value of a counter and the date once per second, and runs indefinitely. Let's create the pod in the default namespace.
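The spec itself was elided from this view; a minimal sketch consistent with the description (the image tag and exact script wording are assumptions) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    # Write an incrementing counter and the date once per second, forever.
    args: [bash, -c, 'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
```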
<!-- END MUNGE: EXAMPLE ../../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml -->
This pod specification maps the directory on the host containing the Docker log files, `/var/lib/docker/containers`, to the same path inside the container. The pod runs one image, `gcr.io/google_containers/fluentd-gcp:1.6`, which is configured to collect the Docker log files from that directory and ingest them into Google Cloud Logging. One instance of this pod runs on each node of the cluster. Kubernetes will notice if this pod fails and automatically restart it.
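A sketch of what such a spec looks like, assuming a `hostPath` volume (the pod and volume names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fluentd-cloud-logging
spec:
  containers:
  - name: fluentd-cloud-logging
    image: gcr.io/google_containers/fluentd-gcp:1.6
    volumeMounts:
    # Mount the host's Docker log directory at the same path in the container.
    - name: containers
      mountPath: /var/lib/docker/containers
  volumes:
  - name: containers
    hostPath:
      path: /var/lib/docker/containers
```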
examples/cassandra/README.md (+3 −3)
@@ -101,7 +101,7 @@ spec:
```
[Download example](cassandra-controller.yaml)
-<!-- END MUNGE: EXAMPLE -->
+<!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->
There are a few things to note in this description. First is that we are running the ```kubernetes/cassandra``` image. This is a standard Cassandra installation on top of Debian. However, it also adds a custom [```SeedProvider```](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java) to Cassandra. In Cassandra, a ```SeedProvider``` bootstraps the gossip protocol that Cassandra uses to find other nodes. The ```KubernetesSeedProvider``` discovers the Kubernetes API Server using the built-in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).
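For orientation, the pod definition under discussion (elided from this view) is roughly of this shape; the `name=cassandra` label matters for the service selector below, while the port is an assumption (Cassandra's CQL port):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  containers:
  - name: cassandra
    image: kubernetes/cassandra
    ports:
    # CQL native transport port (an assumption for this sketch).
    - containerPort: 9042
      name: cql
```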
@@ -132,7 +132,7 @@ spec:
```
[Download example](cassandra-service.yaml)
-<!-- END MUNGE: EXAMPLE -->
+<!-- END MUNGE: EXAMPLE cassandra-service.yaml -->
The important thing to note here is the ```selector```. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case the selector is ```name=cassandra```. If you look back at the Pod specification above, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
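A minimal sketch of the service definition implied by this discussion (the port is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  ports:
  - port: 9042
  # Label query: every pod carrying name=cassandra joins this service.
  selector:
    name: cassandra
```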
@@ -242,7 +242,7 @@ spec:
```
[Download example](cassandra-controller.yaml)
-<!-- END MUNGE: EXAMPLE -->
+<!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->
Most of this replication controller definition is identical to the Cassandra pod definition above; it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute, which contains the controller's selector query, and the ```replicas``` attribute, which specifies the desired number of replicas, in this case 1.
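The skeleton of such a replication controller, with the pod definition embedded as the `template` (a sketch; only the fields discussed here are shown):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra
spec:
  # Desired number of Cassandra pods.
  replicas: 1
  # The controller manages every pod matching this label query.
  selector:
    name: cassandra
  # Recipe used when the controller creates a new pod.
  template:
    metadata:
      labels:
        name: cassandra
    spec:
      containers:
      - name: cassandra
        image: kubernetes/cassandra
```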
examples/celery-rabbitmq/README.md (+5 −5)
@@ -82,7 +82,7 @@ spec:
```
[Download example](rabbitmq-service.yaml)
-<!-- END MUNGE: EXAMPLE -->
+<!-- END MUNGE: EXAMPLE rabbitmq-service.yaml -->
To start the service, run:
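The command itself is elided in this view; by analogy with the controller step below, it is presumably:

```sh
$ kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml
```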
@@ -127,7 +127,7 @@ spec:
```
[Download example](rabbitmq-controller.yaml)
-<!-- END MUNGE: EXAMPLE -->
+<!-- END MUNGE: EXAMPLE rabbitmq-controller.yaml -->
Running `$ kubectl create -f examples/celery-rabbitmq/rabbitmq-controller.yaml` brings up a replication controller that ensures a single pod running a RabbitMQ instance exists.
@@ -168,7 +168,7 @@ spec:
```
[Download example](celery-controller.yaml)
-<!-- END MUNGE: EXAMPLE -->
+<!-- END MUNGE: EXAMPLE celery-controller.yaml -->
There are several things to point out here...
@@ -239,7 +239,7 @@ spec:
```
[Download example](flower-service.yaml)
-<!-- END MUNGE: EXAMPLE -->
+<!-- END MUNGE: EXAMPLE flower-service.yaml -->
It is marked as external (LoadBalanced). However, on many platforms you will have to add an explicit firewall rule to open port 5555.
On GCE this can be done with:
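The exact command is elided here, but a firewall rule along these lines opens the port (the rule name is illustrative):

```sh
$ gcloud compute firewall-rules create flower-5555 --allow=tcp:5555
```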
@@ -280,7 +280,7 @@ spec:
```
[Download example](flower-controller.yaml)
-<!-- END MUNGE: EXAMPLE -->
+<!-- END MUNGE: EXAMPLE flower-controller.yaml -->
This will bring up a new pod with Flower installed and port 5555 (Flower's default port) exposed through the service endpoint. This image uses the following command to start Flower:
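The command is cut off in this view; Flower's standalone invocation against the RabbitMQ service would look roughly like this (the broker URL, built from the service's injected environment variable, is an assumption):

```sh
flower --broker=amqp://guest:guest@${RABBITMQ_SERVICE_SERVICE_HOST}:5672//
```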