
Commit 76d2d70

vbotbuildovich authored and github-actions[bot] committed
auto-docs: Update K8s acceptance tests
1 parent 328b1f2 commit 76d2d70

File tree

3 files changed: +315 -2 lines changed
Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
@operator:none
Feature: Upgrading the operator with Console installed
  @skip:gke @skip:aks @skip:eks
  Scenario: Console v2 to v3 no warnings
    Given I helm install "redpanda-operator" "redpanda/operator" --version v25.1.3 with values:
      """
      """
    And I apply Kubernetes manifest:
      """
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: operator-console-upgrade
      spec:
        clusterSpec:
          console:
            # Old versions have broken chart rendering for the console stanza
            # unless nameOverride is set due to mapping configmap values for
            # both the console deployment and redpanda statefulset to the same
            # name. Setting nameOverride to "broken" works around this.
            nameOverride: broken
          tls:
            enabled: false
          external:
            enabled: false
          statefulset:
            replicas: 1
            sideCars:
              image:
                tag: dev
                repository: localhost/redpanda-operator
      """
    And cluster "operator-console-upgrade" is available
    Then I can helm upgrade "redpanda-operator" "../operator/chart" with values:
      """
      image:
        tag: dev
        repository: localhost/redpanda-operator
      crds:
        experimental: true
      """
    And cluster "operator-console-upgrade" should be stable with 1 nodes
    And the migrated console cluster "operator-console-upgrade" should have 0 warnings

  @skip:gke @skip:aks @skip:eks
  Scenario: Console v2 to v3 with warnings
    Given I helm install "redpanda-operator" "redpanda/operator" --version v25.1.3 with values:
      """
      """
    And I apply Kubernetes manifest:
      """
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: operator-console-upgrade-warnings
      spec:
        clusterSpec:
          console:
            nameOverride: broken
            console:
              roleBindings:
                - roleName: admin
                  subjects:
                    - kind: group
                      provider: OIDC
                      name: devs
          tls:
            enabled: false
          external:
            enabled: false
          statefulset:
            replicas: 1
            sideCars:
              image:
                tag: dev
                repository: localhost/redpanda-operator
      """
    And cluster "operator-console-upgrade-warnings" is available
    Then I can helm upgrade "redpanda-operator" "../operator/chart" with values:
      """
      image:
        tag: dev
        repository: localhost/redpanda-operator
      crds:
        experimental: true
      """
    And cluster "operator-console-upgrade-warnings" should be stable with 1 nodes
    And the migrated console cluster "operator-console-upgrade-warnings" should have 1 warning
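
For readers reproducing this flow outside the acceptance-test harness, the install-then-upgrade steps above translate roughly into the following helm commands. This is a minimal sketch: the chart reference redpanda/operator, the local chart path ../operator/chart, and the values come from the scenario docstrings, while the release name and repository setup are assumed to match your environment.

    # Install the released v25.1.3 operator first (the Console v2 era), as in the Given step.
    helm install redpanda-operator redpanda/operator --version v25.1.3

    # Upgrade in place to a locally built operator chart, mirroring the Then step's values:
    # a dev operator image from a local registry and the experimental CRDs enabled.
    helm upgrade redpanda-operator ../operator/chart \
      --set image.tag=dev \
      --set image.repository=localhost/redpanda-operator \
      --set crds.experimental=true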

modules/manage/examples/kubernetes/role-crds.feature

Lines changed: 178 additions & 2 deletions
@@ -83,6 +83,7 @@ Feature: Role CRDs
      # In this example manifest, a role CRD called "travis-role" manages ACLs for an existing role.
      # The role includes authorization rules that allow reading from topics with names starting with "some-topic".
      # This example assumes that you already have a role called "travis-role" in your cluster.
+      # Note: When the CRD is deleted, the operator will remove both the role and ACLs since it takes full ownership.
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: RedpandaRole
@@ -106,5 +107,180 @@ Feature: Role CRDs
      """
    And role "travis-role" is successfully synced
    And I delete the CRD role "travis-role"
-    Then there should still be role "travis-role" in cluster "sasl"
-    And there should be no ACLs for role "travis-role" in cluster "sasl"
+    Then there should be no role "travis-role" in cluster "sasl"
+
+  @skip:gke @skip:aks @skip:eks
+  Scenario: Add managed principals to the role
+    Given there is no role "team-role" in cluster "sasl"
+    And there are the following pre-existing users in cluster "sasl"
+      | name  | password | mechanism     |
+      | user1 | password | SCRAM-SHA-256 |
+      | user2 | password | SCRAM-SHA-256 |
+      | user3 | password | SCRAM-SHA-256 |
+    When I apply Kubernetes manifest:
+      """
+      apiVersion: cluster.redpanda.com/v1alpha2
+      kind: RedpandaRole
+      metadata:
+        name: team-role
+      spec:
+        cluster:
+          clusterRef:
+            name: sasl
+        principals:
+          - User:user1
+          - User:user2
+      """
+    And role "team-role" is successfully synced
+    Then role "team-role" should exist in cluster "sasl"
+    And role "team-role" should have members "user1 and user2" in cluster "sasl"
+    And RedpandaRole "team-role" should have status field "managedPrincipals" set to "true"
+    When I apply Kubernetes manifest:
+      """
+      apiVersion: cluster.redpanda.com/v1alpha2
+      kind: RedpandaRole
+      metadata:
+        name: team-role
+      spec:
+        cluster:
+          clusterRef:
+            name: sasl
+        principals:
+          - User:user1
+          - User:user2
+          - User:user3
+      """
+    And role "team-role" is successfully synced
+    Then role "team-role" should have members "user1 and user2 and user3" in cluster "sasl"
+    And RedpandaRole "team-role" should have status field "managedPrincipals" set to "true"
+
+  @skip:gke @skip:aks @skip:eks
+  Scenario: Remove managed principals from the role
+    Given there is no role "shrinking-role" in cluster "sasl"
+    And there are the following pre-existing users in cluster "sasl"
+      | name | password | mechanism     |
+      | dev1 | password | SCRAM-SHA-256 |
+      | dev2 | password | SCRAM-SHA-256 |
+      | dev3 | password | SCRAM-SHA-256 |
+    When I apply Kubernetes manifest:
+      """
+      apiVersion: cluster.redpanda.com/v1alpha2
+      kind: RedpandaRole
+      metadata:
+        name: shrinking-role
+      spec:
+        cluster:
+          clusterRef:
+            name: sasl
+        principals:
+          - User:dev1
+          - User:dev2
+          - User:dev3
+      """
+    And role "shrinking-role" is successfully synced
+    Then role "shrinking-role" should have members "dev1 and dev2 and dev3" in cluster "sasl"
+    And RedpandaRole "shrinking-role" should have status field "managedPrincipals" set to "true"
+    When I apply Kubernetes manifest:
+      """
+      apiVersion: cluster.redpanda.com/v1alpha2
+      kind: RedpandaRole
+      metadata:
+        name: shrinking-role
+      spec:
+        cluster:
+          clusterRef:
+            name: sasl
+        principals:
+          - User:dev1
+      """
+    And role "shrinking-role" is successfully synced
+    Then role "shrinking-role" should have members "dev1" in cluster "sasl"
+    And role "shrinking-role" should not have member "dev2" in cluster "sasl"
+    And role "shrinking-role" should not have member "dev3" in cluster "sasl"
+    And RedpandaRole "shrinking-role" should have status field "managedPrincipals" set to "true"
+
+  @skip:gke @skip:aks @skip:eks
+  Scenario: Stop managing principals
+    Given there is no role "clearing-role" in cluster "sasl"
+    And there are the following pre-existing users in cluster "sasl"
+      | name  | password | mechanism     |
+      | temp1 | password | SCRAM-SHA-256 |
+      | temp2 | password | SCRAM-SHA-256 |
+    When I apply Kubernetes manifest:
+      """
+      apiVersion: cluster.redpanda.com/v1alpha2
+      kind: RedpandaRole
+      metadata:
+        name: clearing-role
+      spec:
+        cluster:
+          clusterRef:
+            name: sasl
+        principals:
+          - User:temp1
+          - User:temp2
+      """
+    And role "clearing-role" is successfully synced
+    Then role "clearing-role" should have members "temp1 and temp2" in cluster "sasl"
+    And RedpandaRole "clearing-role" should have status field "managedPrincipals" set to "true"
+    When I apply Kubernetes manifest:
+      """
+      apiVersion: cluster.redpanda.com/v1alpha2
+      kind: RedpandaRole
+      metadata:
+        name: clearing-role
+      spec:
+        cluster:
+          clusterRef:
+            name: sasl
+        principals: []
+      """
+    And role "clearing-role" is successfully synced
+    Then RedpandaRole "clearing-role" should have no members in cluster "sasl"
+    And RedpandaRole "clearing-role" should have status field "managedPrincipals" set to "false"
+
+  @skip:gke @skip:aks @skip:eks
+  Scenario: Replace all managed principals
+    Given there is no role "swap-role" in cluster "sasl"
+    And there are the following pre-existing users in cluster "sasl"
+      | name     | password | mechanism     |
+      | olduser1 | password | SCRAM-SHA-256 |
+      | olduser2 | password | SCRAM-SHA-256 |
+      | newuser1 | password | SCRAM-SHA-256 |
+      | newuser2 | password | SCRAM-SHA-256 |
+    When I apply Kubernetes manifest:
+      """
+      apiVersion: cluster.redpanda.com/v1alpha2
+      kind: RedpandaRole
+      metadata:
+        name: swap-role
+      spec:
+        cluster:
+          clusterRef:
+            name: sasl
+        principals:
+          - User:olduser1
+          - User:olduser2
+      """
+    And role "swap-role" is successfully synced
+    Then role "swap-role" should have members "olduser1 and olduser2" in cluster "sasl"
+    And RedpandaRole "swap-role" should have status field "managedPrincipals" set to "true"
+    When I apply Kubernetes manifest:
+      """
+      apiVersion: cluster.redpanda.com/v1alpha2
+      kind: RedpandaRole
+      metadata:
+        name: swap-role
+      spec:
+        cluster:
+          clusterRef:
+            name: sasl
+        principals:
+          - User:newuser1
+          - User:newuser2
+      """
+    And role "swap-role" is successfully synced
+    Then role "swap-role" should have members "newuser1 and newuser2" in cluster "sasl"
+    And role "swap-role" should not have member "olduser1" in cluster "sasl"
+    And role "swap-role" should not have member "olduser2" in cluster "sasl"
+    And RedpandaRole "swap-role" should have status field "managedPrincipals" set to "true"
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
Feature: Upgrading redpanda
  @skip:gke @skip:aks @skip:eks @skip:k3d
  Scenario: Redpanda upgrade from 25.2.11
    Given I apply Kubernetes manifest:
      """
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: cluster-upgrade
      spec:
        clusterSpec:
          image:
            repository: redpandadata/redpanda
            tag: v25.2.11
          console:
            enabled: false
          statefulset:
            replicas: 3
            sideCars:
              image:
                tag: dev
                repository: localhost/redpanda-operator
      """
    And cluster "cluster-upgrade" should be stable with 3 nodes
    Then I apply Kubernetes manifest:
      """
      ---
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: cluster-upgrade
      spec:
        clusterSpec:
          image:
            repository: redpandadata/redpanda-unstable
            tag: v25.3.1-rc4
          console:
            enabled: false
          statefulset:
            replicas: 3
            sideCars:
              image:
                tag: dev
                repository: localhost/redpanda-operator
      """
    And cluster "cluster-upgrade" should be stable with 3 nodes
