commit 842045ec4fddbfb2fb5f0b797b121ec8ab186532
Author: Marc-Eric Martel
Date:   Fri Nov 3 15:48:17 2023 -0400

    La Métamorphose

diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..5a0642b
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,336 @@
+ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below).
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. 
Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative 
Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+ +------------------------------------------------------------------------------- +This project bundles some components that are also licensed under the Apache +License Version 2.0: + +audience-annotations-0.13.0 +caffeine-2.9.3 +commons-beanutils-1.9.4 +commons-cli-1.4 +commons-collections-3.2.2 +commons-digester-2.1 +commons-io-2.11.0 +commons-lang3-3.8.1 +commons-logging-1.2 +commons-validator-1.7 +error_prone_annotations-2.10.0 +jackson-annotations-2.13.5 +jackson-core-2.13.5 +jackson-databind-2.13.5 +jackson-dataformat-csv-2.13.5 +jackson-datatype-jdk8-2.13.5 +jackson-jaxrs-base-2.13.5 +jackson-jaxrs-json-provider-2.13.5 +jackson-module-jaxb-annotations-2.13.5 +jackson-module-scala_2.13-2.13.5 +jackson-module-scala_2.12-2.13.5 +jakarta.validation-api-2.0.2 +javassist-3.29.2-GA +jetty-client-9.4.52.v20230823 +jetty-continuation-9.4.52.v20230823 +jetty-http-9.4.52.v20230823 +jetty-io-9.4.52.v20230823 +jetty-security-9.4.52.v20230823 +jetty-server-9.4.52.v20230823 +jetty-servlet-9.4.52.v20230823 +jetty-servlets-9.4.52.v20230823 +jetty-util-9.4.52.v20230823 +jetty-util-ajax-9.4.52.v20230823 +jose4j-0.9.3 +lz4-java-1.8.0 +maven-artifact-3.8.8 +metrics-core-4.1.12.1 +metrics-core-2.2.0 +netty-buffer-4.1.94.Final +netty-codec-4.1.94.Final +netty-common-4.1.94.Final +netty-handler-4.1.94.Final +netty-resolver-4.1.94.Final +netty-transport-4.1.94.Final +netty-transport-classes-epoll-4.1.94.Final +netty-transport-native-epoll-4.1.94.Final +netty-transport-native-unix-common-4.1.94.Final +plexus-utils-3.3.1 +reflections-0.10.2 +reload4j-1.2.25 +rocksdbjni-7.9.2 +scala-collection-compat_2.13-2.10.0 +scala-library-2.13.11 +scala-logging_2.13-3.9.4 +scala-reflect-2.13.11 +scala-java8-compat_2.13-1.0.2 +snappy-java-1.1.10.4 +swagger-annotations-2.2.8 +zookeeper-3.8.2 +zookeeper-jute-3.8.2 + +=============================================================================== +This product bundles various third-party components under other open source +licenses. 
This section summarizes those components and their licenses. +See licenses/ for text of these licenses. + +--------------------------------------- +Eclipse Distribution License - v 1.0 +see: licenses/eclipse-distribution-license-1.0 + +jakarta.activation-api-1.2.2 +jakarta.xml.bind-api-2.3.3 + +--------------------------------------- +Eclipse Public License - v 2.0 +see: licenses/eclipse-public-license-2.0 + +jakarta.annotation-api-1.3.5 +jakarta.ws.rs-api-2.1.6 +hk2-api-2.6.1 +hk2-locator-2.6.1 +hk2-utils-2.6.1 +osgi-resource-locator-1.0.3 +aopalliance-repackaged-2.6.1 +jakarta.inject-2.6.1 +jersey-client-2.39.1 +jersey-common-2.39.1 +jersey-container-servlet-2.39.1 +jersey-container-servlet-core-2.39.1 +jersey-hk2-2.39.1 +jersey-server-2.39.1 + +--------------------------------------- +CDDL 1.1 + GPLv2 with classpath exception +see: licenses/CDDL+GPL-1.1 + +javax.activation-api-1.2.0 +javax.annotation-api-1.3.2 +javax.servlet-api-3.1.0 +javax.ws.rs-api-2.1.1 +jaxb-api-2.3.1 +activation-1.1.1 + +--------------------------------------- +MIT License + +argparse4j-0.7.0, see: licenses/argparse-MIT +checker-qual-3.19.0, see: licenses/checker-qual-MIT +jopt-simple-5.0.4, see: licenses/jopt-simple-MIT +slf4j-api-1.7.36, see: licenses/slf4j-MIT +slf4j-reload4j-1.7.36, see: licenses/slf4j-MIT +pcollections-4.0.1, see: licenses/pcollections-MIT + +--------------------------------------- +BSD 2-Clause + +zstd-jni-1.5.5-1 see: licenses/zstd-jni-BSD-2-clause + +--------------------------------------- +BSD 3-Clause + +jline-3.22.0, see: licenses/jline-BSD-3-clause +paranamer-2.8, see: licenses/paranamer-BSD-3-clause + +--------------------------------------- +Do What The F*ck You Want To Public License +see: licenses/DWTFYWTPL + +reflections-0.10.2 diff --git a/NOTICE b/NOTICE new file mode 100644 index 0000000..a50c86d --- /dev/null +++ b/NOTICE @@ -0,0 +1,856 @@ +Apache Kafka +Copyright 2021 The Apache Software Foundation. 
+ +This product includes software developed at +The Apache Software Foundation (https://www.apache.org/). + +This distribution has a binary dependency on jersey, which is available under the CDDL +License. The source code of jersey can be found at https://github.com/jersey/jersey/. + +This distribution has a binary test dependency on jqwik, which is available under +the Eclipse Public License 2.0. The source code can be found at +https://github.com/jlink/jqwik. + +The streams-scala (streams/streams-scala) module was donated by Lightbend and the original code was copyrighted by them: +Copyright (C) 2018 Lightbend Inc. +Copyright (C) 2017-2018 Alexis Seigneurin. + +This project contains the following code copied from Apache Hadoop: +clients/src/main/java/org/apache/kafka/common/utils/PureJavaCrc32C.java +Some portions of this file Copyright (c) 2004-2006 Intel Corporation and licensed under the BSD license. + +This project contains the following code copied from Apache Hive: +streams/src/main/java/org/apache/kafka/streams/state/internals/Murmur3.java + +// ------------------------------------------------------------------ +// NOTICE file corresponding to the section 4d of The Apache License, +// Version 2.0, in this case for +// ------------------------------------------------------------------ + +# Notices for Eclipse GlassFish + +This content is produced and maintained by the Eclipse GlassFish project. + +* Project home: https://projects.eclipse.org/projects/ee4j.glassfish + +## Trademarks + +Eclipse GlassFish, and GlassFish are trademarks of the Eclipse Foundation. + +## Copyright + +All content is the property of the respective authors or their employers. For +more information regarding authorship of content, please consult the listed +source code repository logs. + +## Declared Project Licenses + +This program and the accompanying materials are made available under the terms +of the Eclipse Public License v. 
2.0 which is available at +http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made +available under the following Secondary Licenses when the conditions for such +availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU +General Public License, version 2 with the GNU Classpath Exception which is +available at https://www.gnu.org/software/classpath/license.html. + +SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0 + +## Source Code + +The project maintains the following source code repositories: + +* https://github.com/eclipse-ee4j/glassfish-ha-api +* https://github.com/eclipse-ee4j/glassfish-logging-annotation-processor +* https://github.com/eclipse-ee4j/glassfish-shoal +* https://github.com/eclipse-ee4j/glassfish-cdi-porting-tck +* https://github.com/eclipse-ee4j/glassfish-jsftemplating +* https://github.com/eclipse-ee4j/glassfish-hk2-extra +* https://github.com/eclipse-ee4j/glassfish-hk2 +* https://github.com/eclipse-ee4j/glassfish-fighterfish + +## Third-party Content + +This project leverages the following third party content. + +None + +## Cryptography + +Content may contain encryption software. The country in which you are currently +may have restrictions on the import, possession, and use, and/or re-export to +another country, of encryption software. BEFORE using any encryption software, +please check the country's laws, regulations and policies concerning the import, +possession, or use, and re-export of encryption software, to see if this is +permitted. + + +Apache Yetus - Audience Annotations +Copyright 2015-2017 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + + +Apache Commons CLI +Copyright 2001-2017 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). 
+ + +Apache Commons Lang +Copyright 2001-2018 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + + +# Jackson JSON processor + +Jackson is a high-performance, Free/Open Source JSON processing library. +It was originally written by Tatu Saloranta (tatu.saloranta@iki.fi), and has +been in development since 2007. +It is currently developed by a community of developers, as well as supported +commercially by FasterXML.com. + +## Licensing + +Jackson core and extension components may be licensed under different licenses. +To find the details that apply to this artifact see the accompanying LICENSE file. +For more information, including possible other licensing options, contact +FasterXML.com (http://fasterxml.com). + +## Credits + +A list of contributors may be found from CREDITS file, which is included +in some artifacts (usually source distributions); but is always available +from the source code management (SCM) system project uses. + + +# Notices for Eclipse Project for JAF + +This content is produced and maintained by the Eclipse Project for JAF project. + +* Project home: https://projects.eclipse.org/projects/ee4j.jaf + +## Copyright + +All content is the property of the respective authors or their employers. For +more information regarding authorship of content, please consult the listed +source code repository logs. + +## Declared Project Licenses + +This program and the accompanying materials are made available under the terms +of the Eclipse Distribution License v. 1.0, +which is available at http://www.eclipse.org/org/documents/edl-v10.php. + +SPDX-License-Identifier: BSD-3-Clause + +## Source Code + +The project maintains the following source code repositories: + +* https://github.com/eclipse-ee4j/jaf + +## Third-party Content + +This project leverages the following third party content.
+ +JUnit (4.12) + +* License: Eclipse Public License + + +# Notices for Jakarta Annotations + +This content is produced and maintained by the Jakarta Annotations project. + + * Project home: https://projects.eclipse.org/projects/ee4j.ca + +## Trademarks + +Jakarta Annotations is a trademark of the Eclipse Foundation. + +## Declared Project Licenses + +This program and the accompanying materials are made available under the terms +of the Eclipse Public License v. 2.0 which is available at +http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made +available under the following Secondary Licenses when the conditions for such +availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU +General Public License, version 2 with the GNU Classpath Exception which is +available at https://www.gnu.org/software/classpath/license.html. + +SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0 + +## Source Code + +The project maintains the following source code repositories: + + * https://github.com/eclipse-ee4j/common-annotations-api + +## Third-party Content + +## Cryptography + +Content may contain encryption software. The country in which you are currently +may have restrictions on the import, possession, and use, and/or re-export to +another country, of encryption software. BEFORE using any encryption software, +please check the country's laws, regulations and policies concerning the import, +possession, or use, and re-export of encryption software, to see if this is +permitted. + + +# Notices for the Jakarta RESTful Web Services Project + +This content is produced and maintained by the **Jakarta RESTful Web Services** +project. + +* Project home: https://projects.eclipse.org/projects/ee4j.jaxrs + +## Trademarks + +**Jakarta RESTful Web Services** is a trademark of the Eclipse Foundation. + +## Copyright + +All content is the property of the respective authors or their employers. 
For +more information regarding authorship of content, please consult the listed +source code repository logs. + +## Declared Project Licenses + +This program and the accompanying materials are made available under the terms +of the Eclipse Public License v. 2.0 which is available at +http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made +available under the following Secondary Licenses when the conditions for such +availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU +General Public License, version 2 with the GNU Classpath Exception which is +available at https://www.gnu.org/software/classpath/license.html. + +SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0 + +## Source Code + +The project maintains the following source code repositories: + +* https://github.com/eclipse-ee4j/jaxrs-api + +## Third-party Content + +This project leverages the following third party content. + +javaee-api (7.0) + +* License: Apache-2.0 AND W3C + +JUnit (4.11) + +* License: Common Public License 1.0 + +Mockito (2.16.0) + +* Project: http://site.mockito.org +* Source: https://github.com/mockito/mockito/releases/tag/v2.16.0 + +## Cryptography + +Content may contain encryption software. The country in which you are currently +may have restrictions on the import, possession, and use, and/or re-export to +another country, of encryption software. BEFORE using any encryption software, +please check the country's laws, regulations and policies concerning the import, +possession, or use, and re-export of encryption software, to see if this is +permitted. + + +# Notices for Eclipse Project for JAXB + +This content is produced and maintained by the Eclipse Project for JAXB project. + +* Project home: https://projects.eclipse.org/projects/ee4j.jaxb + +## Trademarks + +Eclipse Project for JAXB is a trademark of the Eclipse Foundation. + +## Copyright + +All content is the property of the respective authors or their employers. 
For +more information regarding authorship of content, please consult the listed +source code repository logs. + +## Declared Project Licenses + +This program and the accompanying materials are made available under the terms +of the Eclipse Distribution License v. 1.0 which is available +at http://www.eclipse.org/org/documents/edl-v10.php. + +SPDX-License-Identifier: BSD-3-Clause + +## Source Code + +The project maintains the following source code repositories: + +* https://github.com/eclipse-ee4j/jaxb-api + +## Third-party Content + +This project leverages the following third party content. + +None + +## Cryptography + +Content may contain encryption software. The country in which you are currently +may have restrictions on the import, possession, and use, and/or re-export to +another country, of encryption software. BEFORE using any encryption software, +please check the country's laws, regulations and policies concerning the import, +possession, or use, and re-export of encryption software, to see if this is +permitted. + + +# Notice for Jersey +This content is produced and maintained by the Eclipse Jersey project. + +* Project home: https://projects.eclipse.org/projects/ee4j.jersey + +## Trademarks +Eclipse Jersey is a trademark of the Eclipse Foundation. + +## Copyright + +All content is the property of the respective authors or their employers. For +more information regarding authorship of content, please consult the listed +source code repository logs. + +## Declared Project Licenses + +This program and the accompanying materials are made available under the terms +of the Eclipse Public License v. 2.0 which is available at +http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made +available under the following Secondary Licenses when the conditions for such +availability set forth in the Eclipse Public License v. 
2.0 are satisfied: GNU +General Public License, version 2 with the GNU Classpath Exception which is +available at https://www.gnu.org/software/classpath/license.html. + +SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0 + +## Source Code +The project maintains the following source code repositories: + +* https://github.com/eclipse-ee4j/jersey + +## Third-party Content + +Angular JS, v1.6.6 +* License MIT (http://www.opensource.org/licenses/mit-license.php) +* Project: http://angularjs.org +* Copyright: (c) 2010-2017 Google, Inc. + +aopalliance Version 1 +* License: all the source code provided by AOP Alliance is Public Domain. +* Project: http://aopalliance.sourceforge.net +* Copyright: Material in the public domain is not protected by copyright + +Bean Validation API 2.0.2 +* License: Apache License, 2.0 +* Project: http://beanvalidation.org/1.1/ +* Copyright: 2009, Red Hat, Inc. and/or its affiliates, and individual contributors +* by the @authors tag. + +Hibernate Validator CDI, 6.1.2.Final +* License: Apache License, 2.0 +* Project: https://beanvalidation.org/ +* Repackaged in org.glassfish.jersey.server.validation.internal.hibernate + +Bootstrap v3.3.7 +* License: MIT license (https://github.com/twbs/bootstrap/blob/master/LICENSE) +* Project: http://getbootstrap.com +* Copyright: 2011-2016 Twitter, Inc + +Google Guava Version 18.0 +* License: Apache License, 2.0 +* Copyright (C) 2009 The Guava Authors + +javax.inject Version: 1 +* License: Apache License, 2.0 +* Copyright (C) 2009 The JSR-330 Expert Group + +Javassist Version 3.25.0-GA +* License: Apache License, 2.0 +* Project: http://www.javassist.org/ +* Copyright (C) 1999- Shigeru Chiba. All Rights Reserved. + +Jackson JAX-RS Providers Version 2.10.1 +* License: Apache License, 2.0 +* Project: https://github.com/FasterXML/jackson-jaxrs-providers +* Copyright: (c) 2009-2011 FasterXML, LLC. All rights reserved unless otherwise indicated.
+ +jQuery v1.12.4 +* License: jquery.org/license +* Project: jquery.org +* Copyright: (c) jQuery Foundation + +jQuery Barcode plugin 0.3 +* License: MIT & GPL (http://www.opensource.org/licenses/mit-license.php & http://www.gnu.org/licenses/gpl.html) +* Project: http://www.pasella.it/projects/jQuery/barcode +* Copyright: (c) 2009 Antonello Pasella antonello.pasella@gmail.com + +JSR-166 Extension - JEP 266 +* License: CC0 +* No copyright +* Written by Doug Lea with assistance from members of JCP JSR-166 Expert Group and released to the public domain, as explained at http://creativecommons.org/publicdomain/zero/1.0/ + +KineticJS, v4.7.1 +* License: MIT license (http://www.opensource.org/licenses/mit-license.php) +* Project: http://www.kineticjs.com, https://github.com/ericdrowell/KineticJS +* Copyright: Eric Rowell + +org.objectweb.asm Version 8.0 +* License: Modified BSD (http://asm.objectweb.org/license.html) +* Copyright (c) 2000-2011 INRIA, France Telecom. All rights reserved. + +org.osgi.core version 6.0.0 +* License: Apache License, 2.0 +* Copyright (c) OSGi Alliance (2005, 2008). All Rights Reserved. + +org.glassfish.jersey.server.internal.monitoring.core +* License: Apache License, 2.0 +* Copyright (c) 2015-2018 Oracle and/or its affiliates. All rights reserved. +* Copyright 2010-2013 Coda Hale and Yammer, Inc. + +W3.org documents +* License: W3C License +* Copyright: Copyright (c) 1994-2001 World Wide Web Consortium, (Massachusetts Institute of Technology, Institut National de Recherche en Informatique et en Automatique, Keio University). All Rights Reserved. http://www.w3.org/Consortium/Legal/ + + +============================================================== + Jetty Web Container + Copyright 1995-2018 Mort Bay Consulting Pty Ltd. +============================================================== + +The Jetty Web Container is Copyright Mort Bay Consulting Pty Ltd +unless otherwise noted. 
+ +Jetty is dual licensed under both + + * The Apache 2.0 License + http://www.apache.org/licenses/LICENSE-2.0.html + + and + + * The Eclipse Public 1.0 License + http://www.eclipse.org/legal/epl-v10.html + +Jetty may be distributed under either license. + +------ +Eclipse + +The following artifacts are EPL. + * org.eclipse.jetty.orbit:org.eclipse.jdt.core + +The following artifacts are EPL and ASL2. + * org.eclipse.jetty.orbit:javax.security.auth.message + + +The following artifacts are EPL and CDDL 1.0. + * org.eclipse.jetty.orbit:javax.mail.glassfish + + +------ +Oracle + +The following artifacts are CDDL + GPLv2 with classpath exception. +https://glassfish.dev.java.net/nonav/public/CDDL+GPL.html + + * javax.servlet:javax.servlet-api + * javax.annotation:javax.annotation-api + * javax.transaction:javax.transaction-api + * javax.websocket:javax.websocket-api + +------ +Oracle OpenJDK + +If ALPN is used to negotiate HTTP/2 connections, then the following +artifacts may be included in the distribution or downloaded when ALPN +module is selected. + + * java.sun.security.ssl + +These artifacts replace/modify OpenJDK classes. The modifications +are hosted at github and both modified and original are under GPL v2 with +classpath exceptions. +http://openjdk.java.net/legal/gplv2+ce.html + + +------ +OW2 + +The following artifacts are licensed by the OW2 Foundation according to the +terms of http://asm.ow2.org/license.html + +org.ow2.asm:asm-commons +org.ow2.asm:asm + + +------ +Apache + +The following artifacts are ASL2 licensed. + +org.apache.taglibs:taglibs-standard-spec +org.apache.taglibs:taglibs-standard-impl + + +------ +MortBay + +The following artifacts are ASL2 licensed. Based on selected classes from +following Apache Tomcat jars, all ASL2 licensed.
+ +org.mortbay.jasper:apache-jsp + org.apache.tomcat:tomcat-jasper + org.apache.tomcat:tomcat-juli + org.apache.tomcat:tomcat-jsp-api + org.apache.tomcat:tomcat-el-api + org.apache.tomcat:tomcat-jasper-el + org.apache.tomcat:tomcat-api + org.apache.tomcat:tomcat-util-scan + org.apache.tomcat:tomcat-util + +org.mortbay.jasper:apache-el + org.apache.tomcat:tomcat-jasper-el + org.apache.tomcat:tomcat-el-api + + +------ +Mortbay + +The following artifacts are CDDL + GPLv2 with classpath exception. + +https://glassfish.dev.java.net/nonav/public/CDDL+GPL.html + +org.eclipse.jetty.toolchain:jetty-schemas + +------ +Assorted + +The UnixCrypt.java code implements the one way cryptography used by +Unix systems for simple password protection. Copyright 1996 Aki Yoshida, +modified April 2001 by Iris Van den Broeke, Daniel Deville. +Permission to use, copy, modify and distribute UnixCrypt +for non-commercial or commercial purposes and without fee is +granted provided that the copyright notice appears in all copies. + + +Apache log4j +Copyright 2007 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + + +Maven Artifact +Copyright 2001-2019 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + + +This product includes software developed by the Indiana University + Extreme! Lab (http://www.extreme.indiana.edu/). + +This product includes software developed by +The Apache Software Foundation (http://www.apache.org/). + +This product includes software developed by +ThoughtWorks (http://www.thoughtworks.com). + +This product includes software developed by +javolution (http://javolution.org/). + +This product includes software developed by +Rome (https://rome.dev.java.net/). + + +Scala +Copyright (c) 2002-2020 EPFL +Copyright (c) 2011-2020 Lightbend, Inc. 
+
+Scala includes software developed at
+LAMP/EPFL (https://lamp.epfl.ch/) and
+Lightbend, Inc. (https://www.lightbend.com/).
+
+Licensed under the Apache License, Version 2.0 (the "License").
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+This software includes projects with other licenses -- see `doc/LICENSE.md`.
+
+
+Apache ZooKeeper - Server
+Copyright 2008-2021 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+
+Apache ZooKeeper - Jute
+Copyright 2008-2021 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+
+The Netty Project
+=================
+
+Please visit the Netty web site for more information:
+
+  * https://netty.io/
+
+Copyright 2014 The Netty Project
+
+The Netty Project licenses this file to you under the Apache License,
+version 2.0 (the "License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at:
+
+  https://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+License for the specific language governing permissions and limitations
+under the License.
+
+Also, please refer to each LICENSE.<component>.txt file, which is located in
+the 'license' directory of the distribution file, for the license terms of the
+components that this product depends on.
+ +------------------------------------------------------------------------------- +This product contains the extensions to Java Collections Framework which has +been derived from the works by JSR-166 EG, Doug Lea, and Jason T. Greene: + + * LICENSE: + * license/LICENSE.jsr166y.txt (Public Domain) + * HOMEPAGE: + * http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/ + * http://viewvc.jboss.org/cgi-bin/viewvc.cgi/jbosscache/experimental/jsr166/ + +This product contains a modified version of Robert Harder's Public Domain +Base64 Encoder and Decoder, which can be obtained at: + + * LICENSE: + * license/LICENSE.base64.txt (Public Domain) + * HOMEPAGE: + * http://iharder.sourceforge.net/current/java/base64/ + +This product contains a modified portion of 'Webbit', an event based +WebSocket and HTTP server, which can be obtained at: + + * LICENSE: + * license/LICENSE.webbit.txt (BSD License) + * HOMEPAGE: + * https://github.com/joewalnes/webbit + +This product contains a modified portion of 'SLF4J', a simple logging +facade for Java, which can be obtained at: + + * LICENSE: + * license/LICENSE.slf4j.txt (MIT License) + * HOMEPAGE: + * https://www.slf4j.org/ + +This product contains a modified portion of 'Apache Harmony', an open source +Java SE, which can be obtained at: + + * NOTICE: + * license/NOTICE.harmony.txt + * LICENSE: + * license/LICENSE.harmony.txt (Apache License 2.0) + * HOMEPAGE: + * https://archive.apache.org/dist/harmony/ + +This product contains a modified portion of 'jbzip2', a Java bzip2 compression +and decompression library written by Matthew J. Francis. It can be obtained at: + + * LICENSE: + * license/LICENSE.jbzip2.txt (MIT License) + * HOMEPAGE: + * https://code.google.com/p/jbzip2/ + +This product contains a modified portion of 'libdivsufsort', a C API library to construct +the suffix array and the Burrows-Wheeler transformed string for any input string of +a constant-size alphabet written by Yuta Mori. 
It can be obtained at: + + * LICENSE: + * license/LICENSE.libdivsufsort.txt (MIT License) + * HOMEPAGE: + * https://github.com/y-256/libdivsufsort + +This product contains a modified portion of Nitsan Wakart's 'JCTools', Java Concurrency Tools for the JVM, + which can be obtained at: + + * LICENSE: + * license/LICENSE.jctools.txt (ASL2 License) + * HOMEPAGE: + * https://github.com/JCTools/JCTools + +This product optionally depends on 'JZlib', a re-implementation of zlib in +pure Java, which can be obtained at: + + * LICENSE: + * license/LICENSE.jzlib.txt (BSD style License) + * HOMEPAGE: + * http://www.jcraft.com/jzlib/ + +This product optionally depends on 'Compress-LZF', a Java library for encoding and +decoding data in LZF format, written by Tatu Saloranta. It can be obtained at: + + * LICENSE: + * license/LICENSE.compress-lzf.txt (Apache License 2.0) + * HOMEPAGE: + * https://github.com/ning/compress + +This product optionally depends on 'lz4', a LZ4 Java compression +and decompression library written by Adrien Grand. It can be obtained at: + + * LICENSE: + * license/LICENSE.lz4.txt (Apache License 2.0) + * HOMEPAGE: + * https://github.com/jpountz/lz4-java + +This product optionally depends on 'lzma-java', a LZMA Java compression +and decompression library, which can be obtained at: + + * LICENSE: + * license/LICENSE.lzma-java.txt (Apache License 2.0) + * HOMEPAGE: + * https://github.com/jponge/lzma-java + +This product contains a modified portion of 'jfastlz', a Java port of FastLZ compression +and decompression library written by William Kinney. 
It can be obtained at: + + * LICENSE: + * license/LICENSE.jfastlz.txt (MIT License) + * HOMEPAGE: + * https://code.google.com/p/jfastlz/ + +This product contains a modified portion of and optionally depends on 'Protocol Buffers', Google's data +interchange format, which can be obtained at: + + * LICENSE: + * license/LICENSE.protobuf.txt (New BSD License) + * HOMEPAGE: + * https://github.com/google/protobuf + +This product optionally depends on 'Bouncy Castle Crypto APIs' to generate +a temporary self-signed X.509 certificate when the JVM does not provide the +equivalent functionality. It can be obtained at: + + * LICENSE: + * license/LICENSE.bouncycastle.txt (MIT License) + * HOMEPAGE: + * https://www.bouncycastle.org/ + +This product optionally depends on 'Snappy', a compression library produced +by Google Inc, which can be obtained at: + + * LICENSE: + * license/LICENSE.snappy.txt (New BSD License) + * HOMEPAGE: + * https://github.com/google/snappy + +This product optionally depends on 'JBoss Marshalling', an alternative Java +serialization API, which can be obtained at: + + * LICENSE: + * license/LICENSE.jboss-marshalling.txt (Apache License 2.0) + * HOMEPAGE: + * https://github.com/jboss-remoting/jboss-marshalling + +This product optionally depends on 'Caliper', Google's micro- +benchmarking framework, which can be obtained at: + + * LICENSE: + * license/LICENSE.caliper.txt (Apache License 2.0) + * HOMEPAGE: + * https://github.com/google/caliper + +This product optionally depends on 'Apache Commons Logging', a logging +framework, which can be obtained at: + + * LICENSE: + * license/LICENSE.commons-logging.txt (Apache License 2.0) + * HOMEPAGE: + * https://commons.apache.org/logging/ + +This product optionally depends on 'Apache Log4J', a logging framework, which +can be obtained at: + + * LICENSE: + * license/LICENSE.log4j.txt (Apache License 2.0) + * HOMEPAGE: + * https://logging.apache.org/log4j/ + +This product optionally depends on 'Aalto XML', an 
ultra-high performance
+non-blocking XML processor, which can be obtained at:
+
+  * LICENSE:
+    * license/LICENSE.aalto-xml.txt (Apache License 2.0)
+  * HOMEPAGE:
+    * http://wiki.fasterxml.com/AaltoHome
+
+This product contains a modified version of 'HPACK', a Java implementation of
+the HTTP/2 HPACK algorithm written by Twitter. It can be obtained at:
+
+  * LICENSE:
+    * license/LICENSE.hpack.txt (Apache License 2.0)
+  * HOMEPAGE:
+    * https://github.com/twitter/hpack
+
+This product contains a modified version of 'HPACK', a Python implementation of
+the HTTP/2 HPACK algorithm written by Cory Benfield. It can be obtained at:
+
+  * LICENSE:
+    * license/LICENSE.hyper-hpack.txt (MIT License)
+  * HOMEPAGE:
+    * https://github.com/python-hyper/hpack/
+
+This product contains a modified version of 'HPACK', a C implementation of
+the HTTP/2 HPACK algorithm written by Tatsuhiro Tsujikawa. It can be obtained at:
+
+  * LICENSE:
+    * license/LICENSE.nghttp2-hpack.txt (MIT License)
+  * HOMEPAGE:
+    * https://github.com/nghttp2/nghttp2/
+
+This product contains a modified portion of 'Apache Commons Lang', a Java library
+that provides utilities for the java.lang API, which can be obtained at:
+
+  * LICENSE:
+    * license/LICENSE.commons-lang.txt (Apache License 2.0)
+  * HOMEPAGE:
+    * https://commons.apache.org/proper/commons-lang/
+
+
+This product contains the Maven wrapper scripts from 'Maven Wrapper', which provides an easy way to ensure a user has everything necessary to run the Maven build.
+
+  * LICENSE:
+    * license/LICENSE.mvn-wrapper.txt (Apache License 2.0)
+  * HOMEPAGE:
+    * https://github.com/takari/maven-wrapper
+
+This product contains the dnsinfo.h header file, which provides a way to retrieve the system DNS configuration on macOS.
+This private header is also used by Apple's open source
+mDNSResponder (https://opensource.apple.com/tarballs/mDNSResponder/).
+ + * LICENSE: + * license/LICENSE.dnsinfo.txt (Apple Public Source License 2.0) + * HOMEPAGE: + * https://www.opensource.apple.com/source/configd/configd-453.19/dnsinfo/dnsinfo.h \ No newline at end of file diff --git a/bin/connect-distributed.sh b/bin/connect-distributed.sh new file mode 100755 index 0000000..b8088ad --- /dev/null +++ b/bin/connect-distributed.sh @@ -0,0 +1,45 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +if [ $# -lt 1 ]; +then + echo "USAGE: $0 [-daemon] connect-distributed.properties" + exit 1 +fi + +base_dir=$(dirname $0) + +if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then + export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties" +fi + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G" +fi + +EXTRA_ARGS=${EXTRA_ARGS-'-name connectDistributed'} + +COMMAND=$1 +case $COMMAND in + -daemon) + EXTRA_ARGS="-daemon "$EXTRA_ARGS + shift + ;; + *) + ;; +esac + +exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectDistributed "$@" diff --git a/bin/connect-mirror-maker.sh b/bin/connect-mirror-maker.sh new file mode 100755 index 0000000..8e2b2e1 --- /dev/null +++ b/bin/connect-mirror-maker.sh @@ -0,0 +1,45 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +if [ $# -lt 1 ]; +then + echo "USAGE: $0 [-daemon] mm2.properties" + exit 1 +fi + +base_dir=$(dirname $0) + +if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then + export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties" +fi + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G" +fi + +EXTRA_ARGS=${EXTRA_ARGS-'-name mirrorMaker'} + +COMMAND=$1 +case $COMMAND in + -daemon) + EXTRA_ARGS="-daemon "$EXTRA_ARGS + shift + ;; + *) + ;; +esac + +exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.mirror.MirrorMaker "$@" diff --git a/bin/connect-plugin-path.sh b/bin/connect-plugin-path.sh new file mode 100755 index 0000000..5074206 --- /dev/null +++ b/bin/connect-plugin-path.sh @@ -0,0 +1,21 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G" +fi + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.ConnectPluginPath "$@" diff --git a/bin/connect-standalone.sh b/bin/connect-standalone.sh new file mode 100755 index 0000000..441069f --- /dev/null +++ b/bin/connect-standalone.sh @@ -0,0 +1,45 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +if [ $# -lt 1 ]; +then + echo "USAGE: $0 [-daemon] connect-standalone.properties" + exit 1 +fi + +base_dir=$(dirname $0) + +if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then + export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/connect-log4j.properties" +fi + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xms256M -Xmx2G" +fi + +EXTRA_ARGS=${EXTRA_ARGS-'-name connectStandalone'} + +COMMAND=$1 +case $COMMAND in + -daemon) + EXTRA_ARGS="-daemon "$EXTRA_ARGS + shift + ;; + *) + ;; +esac + +exec $(dirname $0)/kafka-run-class.sh $EXTRA_ARGS org.apache.kafka.connect.cli.ConnectStandalone "$@" diff --git a/bin/kafka-acls.sh b/bin/kafka-acls.sh new file mode 100755 index 0000000..8fa6554 --- /dev/null +++ b/bin/kafka-acls.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.admin.AclCommand "$@" diff --git a/bin/kafka-broker-api-versions.sh b/bin/kafka-broker-api-versions.sh new file mode 100755 index 0000000..4f560a0 --- /dev/null +++ b/bin/kafka-broker-api-versions.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. 
See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.admin.BrokerApiVersionsCommand "$@" diff --git a/bin/kafka-cluster.sh b/bin/kafka-cluster.sh new file mode 100755 index 0000000..f09858c --- /dev/null +++ b/bin/kafka-cluster.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.ClusterTool "$@" diff --git a/bin/kafka-configs.sh b/bin/kafka-configs.sh new file mode 100755 index 0000000..2f9eb8c --- /dev/null +++ b/bin/kafka-configs.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.admin.ConfigCommand "$@" diff --git a/bin/kafka-console-consumer.sh b/bin/kafka-console-consumer.sh new file mode 100755 index 0000000..dbaac2b --- /dev/null +++ b/bin/kafka-console-consumer.sh @@ -0,0 +1,21 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx512M" +fi + +exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@" diff --git a/bin/kafka-console-producer.sh b/bin/kafka-console-producer.sh new file mode 100755 index 0000000..e5187b8 --- /dev/null +++ b/bin/kafka-console-producer.sh @@ -0,0 +1,20 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx512M" +fi +exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@" diff --git a/bin/kafka-consumer-groups.sh b/bin/kafka-consumer-groups.sh new file mode 100755 index 0000000..feb063d --- /dev/null +++ b/bin/kafka-consumer-groups.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.admin.ConsumerGroupCommand "$@" diff --git a/bin/kafka-consumer-perf-test.sh b/bin/kafka-consumer-perf-test.sh new file mode 100755 index 0000000..4eebe87 --- /dev/null +++ b/bin/kafka-consumer-perf-test.sh @@ -0,0 +1,20 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx512M" +fi +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.ConsumerPerformance "$@" diff --git a/bin/kafka-delegation-tokens.sh b/bin/kafka-delegation-tokens.sh new file mode 100755 index 0000000..9f8bb13 --- /dev/null +++ b/bin/kafka-delegation-tokens.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. 
See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.DelegationTokenCommand "$@" diff --git a/bin/kafka-delete-records.sh b/bin/kafka-delete-records.sh new file mode 100755 index 0000000..e9db8f9 --- /dev/null +++ b/bin/kafka-delete-records.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.DeleteRecordsCommand "$@" diff --git a/bin/kafka-dump-log.sh b/bin/kafka-dump-log.sh new file mode 100755 index 0000000..a97ea7d --- /dev/null +++ b/bin/kafka-dump-log.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.tools.DumpLogSegments "$@" diff --git a/bin/kafka-e2e-latency.sh b/bin/kafka-e2e-latency.sh new file mode 100755 index 0000000..32d1063 --- /dev/null +++ b/bin/kafka-e2e-latency.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.EndToEndLatency "$@" diff --git a/bin/kafka-features.sh b/bin/kafka-features.sh new file mode 100755 index 0000000..8d90a06 --- /dev/null +++ b/bin/kafka-features.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.FeatureCommand "$@" diff --git a/bin/kafka-get-offsets.sh b/bin/kafka-get-offsets.sh new file mode 100755 index 0000000..993a202 --- /dev/null +++ b/bin/kafka-get-offsets.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.tools.GetOffsetShell "$@" diff --git a/bin/kafka-jmx.sh b/bin/kafka-jmx.sh new file mode 100755 index 0000000..88b3874 --- /dev/null +++ b/bin/kafka-jmx.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.JmxTool "$@" diff --git a/bin/kafka-leader-election.sh b/bin/kafka-leader-election.sh new file mode 100755 index 0000000..88baef3 --- /dev/null +++ b/bin/kafka-leader-election.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.admin.LeaderElectionCommand "$@" diff --git a/bin/kafka-log-dirs.sh b/bin/kafka-log-dirs.sh new file mode 100755 index 0000000..9894d69 --- /dev/null +++ b/bin/kafka-log-dirs.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.LogDirsCommand "$@" diff --git a/bin/kafka-metadata-quorum.sh b/bin/kafka-metadata-quorum.sh new file mode 100755 index 0000000..3b25c7d --- /dev/null +++ b/bin/kafka-metadata-quorum.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.MetadataQuorumCommand "$@" diff --git a/bin/kafka-metadata-shell.sh b/bin/kafka-metadata-shell.sh new file mode 100755 index 0000000..289f0c1 --- /dev/null +++ b/bin/kafka-metadata-shell.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.shell.MetadataShell "$@" diff --git a/bin/kafka-mirror-maker.sh b/bin/kafka-mirror-maker.sh new file mode 100755 index 0000000..981f271 --- /dev/null +++ b/bin/kafka-mirror-maker.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.tools.MirrorMaker "$@" diff --git a/bin/kafka-producer-perf-test.sh b/bin/kafka-producer-perf-test.sh new file mode 100755 index 0000000..73a6288 --- /dev/null +++ b/bin/kafka-producer-perf-test.sh @@ -0,0 +1,20 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx512M" +fi +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.ProducerPerformance "$@" diff --git a/bin/kafka-reassign-partitions.sh b/bin/kafka-reassign-partitions.sh new file mode 100755 index 0000000..4c7f1bc --- /dev/null +++ b/bin/kafka-reassign-partitions.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +exec $(dirname $0)/kafka-run-class.sh kafka.admin.ReassignPartitionsCommand "$@" diff --git a/bin/kafka-replica-verification.sh b/bin/kafka-replica-verification.sh new file mode 100755 index 0000000..1df5639 --- /dev/null +++ b/bin/kafka-replica-verification.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.ReplicaVerificationTool "$@" diff --git a/bin/kafka-run-class.sh b/bin/kafka-run-class.sh new file mode 100755 index 0000000..9ab96d7 --- /dev/null +++ b/bin/kafka-run-class.sh @@ -0,0 +1,347 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ $# -lt 1 ]; +then + echo "USAGE: $0 [-daemon] [-name servicename] [-loggc] classname [opts]" + exit 1 +fi + +# CYGWIN == 1 if Cygwin is detected, else 0. +if [[ $(uname -a) =~ "CYGWIN" ]]; then + CYGWIN=1 +else + CYGWIN=0 +fi + +if [ -z "$INCLUDE_TEST_JARS" ]; then + INCLUDE_TEST_JARS=false +fi + +# Exclude jars not necessary for running commands. +regex="(-(test|test-sources|src|scaladoc|javadoc)\.jar|jar.asc|connect-file.*\.jar)$" +should_include_file() { + if [ "$INCLUDE_TEST_JARS" = true ]; then + return 0 + fi + file=$1 + if [ -z "$(echo "$file" | grep -E "$regex")" ] ; then + return 0 + else + return 1 + fi +} + +base_dir=$(dirname $0)/.. 
+ +if [ -z "$SCALA_VERSION" ]; then + SCALA_VERSION=2.13.11 + if [[ -f "$base_dir/gradle.properties" ]]; then + SCALA_VERSION=`grep "^scalaVersion=" "$base_dir/gradle.properties" | cut -d= -f 2` + fi +fi + +if [ -z "$SCALA_BINARY_VERSION" ]; then + SCALA_BINARY_VERSION=$(echo $SCALA_VERSION | cut -f 1-2 -d '.') +fi + +# run ./gradlew copyDependantLibs to get all dependant jars in a local dir +shopt -s nullglob +if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then + for dir in "$base_dir"/core/build/dependant-libs-${SCALA_VERSION}*; + do + CLASSPATH="$CLASSPATH:$dir/*" + done +fi + +for file in "$base_dir"/examples/build/libs/kafka-examples*.jar; +do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi +done + +if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then + clients_lib_dir=$(dirname $0)/../clients/build/libs + streams_lib_dir=$(dirname $0)/../streams/build/libs + streams_dependant_clients_lib_dir=$(dirname $0)/../streams/build/dependant-libs-${SCALA_VERSION} +else + clients_lib_dir=/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs + streams_lib_dir=$clients_lib_dir + streams_dependant_clients_lib_dir=$streams_lib_dir +fi + + +for file in "$clients_lib_dir"/kafka-clients*.jar; +do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi +done + +for file in "$streams_lib_dir"/kafka-streams*.jar; +do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi +done + +if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then + for file in "$base_dir"/streams/examples/build/libs/kafka-streams-examples*.jar; + do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi + done +else + VERSION_NO_DOTS=`echo $UPGRADE_KAFKA_STREAMS_TEST_VERSION | sed 's/\.//g'` + SHORT_VERSION_NO_DOTS=${VERSION_NO_DOTS:0:((${#VERSION_NO_DOTS} - 1))} # remove last char, ie, bug-fix number + for file in 
"$base_dir"/streams/upgrade-system-tests-$SHORT_VERSION_NO_DOTS/build/libs/kafka-streams-upgrade-system-tests*.jar; + do + if should_include_file "$file"; then + CLASSPATH="$file":"$CLASSPATH" + fi + done + if [ "$SHORT_VERSION_NO_DOTS" = "0100" ]; then + CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zkclient-0.8.jar":"$CLASSPATH" + CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zookeeper-3.4.6.jar":"$CLASSPATH" + fi + if [ "$SHORT_VERSION_NO_DOTS" = "0101" ]; then + CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zkclient-0.9.jar":"$CLASSPATH" + CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zookeeper-3.4.8.jar":"$CLASSPATH" + fi +fi + +for file in "$streams_dependant_clients_lib_dir"/rocksdb*.jar; +do + CLASSPATH="$CLASSPATH":"$file" +done + +for file in "$streams_dependant_clients_lib_dir"/*hamcrest*.jar; +do + CLASSPATH="$CLASSPATH":"$file" +done + +for file in "$base_dir"/shell/build/libs/kafka-shell*.jar; +do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi +done + +for dir in "$base_dir"/shell/build/dependant-libs-${SCALA_VERSION}*; +do + CLASSPATH="$CLASSPATH:$dir/*" +done + +for file in "$base_dir"/tools/build/libs/kafka-tools*.jar; +do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi +done + +for dir in "$base_dir"/tools/build/dependant-libs-${SCALA_VERSION}*; +do + CLASSPATH="$CLASSPATH:$dir/*" +done + +for file in "$base_dir"/trogdor/build/libs/trogdor-*.jar; +do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi +done + +for dir in "$base_dir"/trogdor/build/dependant-libs-${SCALA_VERSION}*; +do + CLASSPATH="$CLASSPATH:$dir/*" +done + +for cc_pkg in "api" "transforms" "runtime" "mirror" "mirror-client" "json" "tools" "basic-auth-extension" +do + for file in "$base_dir"/connect/${cc_pkg}/build/libs/connect-${cc_pkg}*.jar; + do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi + done + 
if [ -d "$base_dir/connect/${cc_pkg}/build/dependant-libs" ] ; then + CLASSPATH="$CLASSPATH:$base_dir/connect/${cc_pkg}/build/dependant-libs/*" + fi +done + +# classpath addition for release +for file in "$base_dir"/libs/*; +do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi +done + +for file in "$base_dir"/core/build/libs/kafka_${SCALA_BINARY_VERSION}*.jar; +do + if should_include_file "$file"; then + CLASSPATH="$CLASSPATH":"$file" + fi +done +shopt -u nullglob + +if [ -z "$CLASSPATH" ] ; then + echo "Classpath is empty. Please build the project first e.g. by running './gradlew jar -PscalaVersion=$SCALA_VERSION'" + exit 1 +fi + +# JMX settings +if [ -z "$KAFKA_JMX_OPTS" ]; then + KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false " +fi + +# JMX port to use +if [ $JMX_PORT ]; then + KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT " + if ! echo "$KAFKA_JMX_OPTS" | grep -qF -- '-Dcom.sun.management.jmxremote.rmi.port=' ; then + # If unset, set the RMI port to address issues with monitoring Kafka running in containers + KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT" + fi +fi + +# Log directory to use +if [ "x$LOG_DIR" = "x" ]; then + LOG_DIR="$base_dir/logs" +fi + +# Log4j settings +if [ -z "$KAFKA_LOG4J_OPTS" ]; then + # Log to console. This is a tool. + LOG4J_DIR="$base_dir/config/tools-log4j.properties" + # If Cygwin is detected, LOG4J_DIR is converted to Windows format. + (( CYGWIN )) && LOG4J_DIR=$(cygpath --path --mixed "${LOG4J_DIR}") + KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_DIR}" +else + # create logs directory + if [ ! -d "$LOG_DIR" ]; then + mkdir -p "$LOG_DIR" + fi +fi + +# If Cygwin is detected, LOG_DIR is converted to Windows format. 
+(( CYGWIN )) && LOG_DIR=$(cygpath --path --mixed "${LOG_DIR}") +KAFKA_LOG4J_OPTS="-Dkafka.logs.dir=$LOG_DIR $KAFKA_LOG4J_OPTS" + +# Generic jvm settings you want to add +if [ -z "$KAFKA_OPTS" ]; then + KAFKA_OPTS="" +fi + +# Set Debug options if enabled +if [ "x$KAFKA_DEBUG" != "x" ]; then + + # Use default ports + DEFAULT_JAVA_DEBUG_PORT="5005" + + if [ -z "$JAVA_DEBUG_PORT" ]; then + JAVA_DEBUG_PORT="$DEFAULT_JAVA_DEBUG_PORT" + fi + + # Use the defaults if JAVA_DEBUG_OPTS was not set + DEFAULT_JAVA_DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=${DEBUG_SUSPEND_FLAG:-n},address=$JAVA_DEBUG_PORT" + if [ -z "$JAVA_DEBUG_OPTS" ]; then + JAVA_DEBUG_OPTS="$DEFAULT_JAVA_DEBUG_OPTS" + fi + + echo "Enabling Java debug options: $JAVA_DEBUG_OPTS" + KAFKA_OPTS="$JAVA_DEBUG_OPTS $KAFKA_OPTS" +fi + +# Which java to use +if [ -z "$JAVA_HOME" ]; then + JAVA="java" +else + JAVA="$JAVA_HOME/bin/java" +fi + +# Memory options +if [ -z "$KAFKA_HEAP_OPTS" ]; then + KAFKA_HEAP_OPTS="-Xmx256M" +fi + +# JVM performance options +# MaxInlineLevel=15 is the default since JDK 14 and can be removed once older JDKs are no longer supported +if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then + KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true" +fi + +while [ $# -gt 0 ]; do + COMMAND=$1 + case $COMMAND in + -name) + DAEMON_NAME=$2 + CONSOLE_OUTPUT_FILE=$LOG_DIR/$DAEMON_NAME.out + shift 2 + ;; + -loggc) + if [ -z "$KAFKA_GC_LOG_OPTS" ]; then + GC_LOG_ENABLED="true" + fi + shift + ;; + -daemon) + DAEMON_MODE="true" + shift + ;; + *) + break + ;; + esac +done + +# GC options +GC_FILE_SUFFIX='-gc.log' +GC_LOG_FILE_NAME='' +if [ "x$GC_LOG_ENABLED" = "xtrue" ]; then + GC_LOG_FILE_NAME=$DAEMON_NAME$GC_FILE_SUFFIX + + # The first segment of the version number, which is '1' for releases before Java 9 + # it then becomes '9', '10', ... 
+ # Some examples of the first line of `java --version`: + # 8 -> java version "1.8.0_152" + # 9.0.4 -> java version "9.0.4" + # 10 -> java version "10" 2018-03-20 + # 10.0.1 -> java version "10.0.1" 2018-04-17 + # We need to match to the end of the line to prevent sed from printing the characters that do not match + JAVA_MAJOR_VERSION=$("$JAVA" -version 2>&1 | sed -E -n 's/.* version "([0-9]*).*$/\1/p') + if [[ "$JAVA_MAJOR_VERSION" -ge "9" ]] ; then + KAFKA_GC_LOG_OPTS="-Xlog:gc*:file=$LOG_DIR/$GC_LOG_FILE_NAME:time,tags:filecount=10,filesize=100M" + else + KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M" + fi +fi + +# Remove a possible colon prefix from the classpath (happens at lines like `CLASSPATH="$CLASSPATH:$file"` when CLASSPATH is blank) +# Syntax used on the right side is native Bash string manipulation; for more details see +# http://tldp.org/LDP/abs/html/string-manipulation.html, specifically the section titled "Substring Removal" +CLASSPATH=${CLASSPATH#:} + +# If Cygwin is detected, classpath is converted to Windows format. +(( CYGWIN )) && CLASSPATH=$(cygpath --path --mixed "${CLASSPATH}") + +# Launch mode +if [ "x$DAEMON_MODE" = "xtrue" ]; then + nohup "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null & +else + exec "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@" +fi diff --git a/bin/kafka-server-start.sh b/bin/kafka-server-start.sh new file mode 100755 index 0000000..5a53126 --- /dev/null +++ b/bin/kafka-server-start.sh @@ -0,0 +1,44 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. 
See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ $# -lt 1 ]; +then + echo "USAGE: $0 [-daemon] server.properties [--override property=value]*" + exit 1 +fi +base_dir=$(dirname $0) + +if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then + export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties" +fi + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" +fi + +EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'} + +COMMAND=$1 +case $COMMAND in + -daemon) + EXTRA_ARGS="-daemon "$EXTRA_ARGS + shift + ;; + *) + ;; +esac + +exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@" diff --git a/bin/kafka-server-stop.sh b/bin/kafka-server-stop.sh new file mode 100755 index 0000000..437189f --- /dev/null +++ b/bin/kafka-server-stop.sh @@ -0,0 +1,35 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +SIGNAL=${SIGNAL:-TERM} + +OSNAME=$(uname -s) +if [[ "$OSNAME" == "OS/390" ]]; then + if [ -z $JOBNAME ]; then + JOBNAME="KAFKSTRT" + fi + PIDS=$(ps -A -o pid,jobname,comm | grep -i $JOBNAME | grep java | grep -v grep | awk '{print $1}') +elif [[ "$OSNAME" == "OS400" ]]; then + PIDS=$(ps -Af | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $2}') +else + PIDS=$(ps ax | grep ' kafka\.Kafka ' | grep java | grep -v grep | awk '{print $1}') +fi + +if [ -z "$PIDS" ]; then + echo "No kafka server to stop" + exit 1 +else + kill -s $SIGNAL $PIDS +fi diff --git a/bin/kafka-storage.sh b/bin/kafka-storage.sh new file mode 100755 index 0000000..eef9342 --- /dev/null +++ b/bin/kafka-storage.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +exec $(dirname $0)/kafka-run-class.sh kafka.tools.StorageTool "$@" diff --git a/bin/kafka-streams-application-reset.sh b/bin/kafka-streams-application-reset.sh new file mode 100755 index 0000000..26ab766 --- /dev/null +++ b/bin/kafka-streams-application-reset.sh @@ -0,0 +1,21 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx512M" +fi + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.StreamsResetter "$@" diff --git a/bin/kafka-topics.sh b/bin/kafka-topics.sh new file mode 100755 index 0000000..ad6a2d4 --- /dev/null +++ b/bin/kafka-topics.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.admin.TopicCommand "$@" diff --git a/bin/kafka-transactions.sh b/bin/kafka-transactions.sh new file mode 100755 index 0000000..6fb5233 --- /dev/null +++ b/bin/kafka-transactions.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.TransactionsCommand "$@" diff --git a/bin/kafka-verifiable-consumer.sh b/bin/kafka-verifiable-consumer.sh new file mode 100755 index 0000000..852847d --- /dev/null +++ b/bin/kafka-verifiable-consumer.sh @@ -0,0 +1,20 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. 
+# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx512M" +fi +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.VerifiableConsumer "$@" diff --git a/bin/kafka-verifiable-producer.sh b/bin/kafka-verifiable-producer.sh new file mode 100755 index 0000000..b59bae7 --- /dev/null +++ b/bin/kafka-verifiable-producer.sh @@ -0,0 +1,20 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx512M" +fi +exec $(dirname $0)/kafka-run-class.sh org.apache.kafka.tools.VerifiableProducer "$@" diff --git a/bin/trogdor.sh b/bin/trogdor.sh new file mode 100755 index 0000000..3324c4e --- /dev/null +++ b/bin/trogdor.sh @@ -0,0 +1,50 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +usage() { + cat <nul 2>&1 + IF NOT ERRORLEVEL 1 ( + rem 32-bit OS + set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M + ) ELSE ( + rem 64-bit OS + set KAFKA_HEAP_OPTS=-Xmx1G -Xms1G + ) +) +"%~dp0kafka-run-class.bat" kafka.Kafka %* +EndLocal diff --git a/bin/windows/kafka-server-stop.bat b/bin/windows/kafka-server-stop.bat new file mode 100644 index 0000000..676577c --- /dev/null +++ b/bin/windows/kafka-server-stop.bat @@ -0,0 +1,18 @@ +@echo off +rem Licensed to the Apache Software Foundation (ASF) under one or more +rem contributor license agreements. See the NOTICE file distributed with +rem this work for additional information regarding copyright ownership. +rem The ASF licenses this file to You under the Apache License, Version 2.0 +rem (the "License"); you may not use this file except in compliance with +rem the License. 
You may obtain a copy of the License at +rem +rem http://www.apache.org/licenses/LICENSE-2.0 +rem +rem Unless required by applicable law or agreed to in writing, software +rem distributed under the License is distributed on an "AS IS" BASIS, +rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +rem See the License for the specific language governing permissions and +rem limitations under the License. + +wmic process where (commandline like "%%kafka.Kafka%%" and not name="wmic.exe") delete +rem ps ax | grep -i 'kafka.Kafka' | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM diff --git a/bin/windows/kafka-storage.bat b/bin/windows/kafka-storage.bat new file mode 100644 index 0000000..4a0e458 --- /dev/null +++ b/bin/windows/kafka-storage.bat @@ -0,0 +1,17 @@ +@echo off +rem Licensed to the Apache Software Foundation (ASF) under one or more +rem contributor license agreements. See the NOTICE file distributed with +rem this work for additional information regarding copyright ownership. +rem The ASF licenses this file to You under the Apache License, Version 2.0 +rem (the "License"); you may not use this file except in compliance with +rem the License. You may obtain a copy of the License at +rem +rem http://www.apache.org/licenses/LICENSE-2.0 +rem +rem Unless required by applicable law or agreed to in writing, software +rem distributed under the License is distributed on an "AS IS" BASIS, +rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +rem See the License for the specific language governing permissions and +rem limitations under the License. 
+ +"%~dp0kafka-run-class.bat" kafka.tools.StorageTool %* diff --git a/bin/windows/kafka-streams-application-reset.bat b/bin/windows/kafka-streams-application-reset.bat new file mode 100644 index 0000000..77ffc7d --- /dev/null +++ b/bin/windows/kafka-streams-application-reset.bat @@ -0,0 +1,23 @@ +@echo off +rem Licensed to the Apache Software Foundation (ASF) under one or more +rem contributor license agreements. See the NOTICE file distributed with +rem this work for additional information regarding copyright ownership. +rem The ASF licenses this file to You under the Apache License, Version 2.0 +rem (the "License"); you may not use this file except in compliance with +rem the License. You may obtain a copy of the License at +rem +rem http://www.apache.org/licenses/LICENSE-2.0 +rem +rem Unless required by applicable law or agreed to in writing, software +rem distributed under the License is distributed on an "AS IS" BASIS, +rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +rem See the License for the specific language governing permissions and +rem limitations under the License. + +SetLocal +IF ["%KAFKA_HEAP_OPTS%"] EQU [""] ( + set KAFKA_HEAP_OPTS=-Xmx512M +) + +"%~dp0kafka-run-class.bat" org.apache.kafka.tools.StreamsResetter %* +EndLocal diff --git a/bin/windows/kafka-topics.bat b/bin/windows/kafka-topics.bat new file mode 100644 index 0000000..677b09d --- /dev/null +++ b/bin/windows/kafka-topics.bat @@ -0,0 +1,17 @@ +@echo off +rem Licensed to the Apache Software Foundation (ASF) under one or more +rem contributor license agreements. See the NOTICE file distributed with +rem this work for additional information regarding copyright ownership. +rem The ASF licenses this file to You under the Apache License, Version 2.0 +rem (the "License"); you may not use this file except in compliance with +rem the License. 
You may obtain a copy of the License at +rem +rem http://www.apache.org/licenses/LICENSE-2.0 +rem +rem Unless required by applicable law or agreed to in writing, software +rem distributed under the License is distributed on an "AS IS" BASIS, +rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +rem See the License for the specific language governing permissions and +rem limitations under the License. + +"%~dp0kafka-run-class.bat" kafka.admin.TopicCommand %* diff --git a/bin/windows/kafka-transactions.bat b/bin/windows/kafka-transactions.bat new file mode 100644 index 0000000..9bb7585 --- /dev/null +++ b/bin/windows/kafka-transactions.bat @@ -0,0 +1,17 @@ +@echo off +rem Licensed to the Apache Software Foundation (ASF) under one or more +rem contributor license agreements. See the NOTICE file distributed with +rem this work for additional information regarding copyright ownership. +rem The ASF licenses this file to You under the Apache License, Version 2.0 +rem (the "License"); you may not use this file except in compliance with +rem the License. You may obtain a copy of the License at +rem +rem http://www.apache.org/licenses/LICENSE-2.0 +rem +rem Unless required by applicable law or agreed to in writing, software +rem distributed under the License is distributed on an "AS IS" BASIS, +rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +rem See the License for the specific language governing permissions and +rem limitations under the License. + +"%~dp0kafka-run-class.bat" org.apache.kafka.tools.TransactionsCommand %* diff --git a/bin/windows/zookeeper-server-start.bat b/bin/windows/zookeeper-server-start.bat new file mode 100644 index 0000000..f201a58 --- /dev/null +++ b/bin/windows/zookeeper-server-start.bat @@ -0,0 +1,30 @@ +@echo off +rem Licensed to the Apache Software Foundation (ASF) under one or more +rem contributor license agreements. 
See the NOTICE file distributed with +rem this work for additional information regarding copyright ownership. +rem The ASF licenses this file to You under the Apache License, Version 2.0 +rem (the "License"); you may not use this file except in compliance with +rem the License. You may obtain a copy of the License at +rem +rem http://www.apache.org/licenses/LICENSE-2.0 +rem +rem Unless required by applicable law or agreed to in writing, software +rem distributed under the License is distributed on an "AS IS" BASIS, +rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +rem See the License for the specific language governing permissions and +rem limitations under the License. + +IF [%1] EQU [] ( + echo USAGE: %0 zookeeper.properties + EXIT /B 1 +) + +SetLocal +IF ["%KAFKA_LOG4J_OPTS%"] EQU [""] ( + set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/log4j.properties +) +IF ["%KAFKA_HEAP_OPTS%"] EQU [""] ( + set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M +) +"%~dp0kafka-run-class.bat" org.apache.zookeeper.server.quorum.QuorumPeerMain %* +EndLocal diff --git a/bin/windows/zookeeper-server-stop.bat b/bin/windows/zookeeper-server-stop.bat new file mode 100644 index 0000000..8b57dd8 --- /dev/null +++ b/bin/windows/zookeeper-server-stop.bat @@ -0,0 +1,17 @@ +@echo off +rem Licensed to the Apache Software Foundation (ASF) under one or more +rem contributor license agreements. See the NOTICE file distributed with +rem this work for additional information regarding copyright ownership. +rem The ASF licenses this file to You under the Apache License, Version 2.0 +rem (the "License"); you may not use this file except in compliance with +rem the License. 
You may obtain a copy of the License at +rem +rem http://www.apache.org/licenses/LICENSE-2.0 +rem +rem Unless required by applicable law or agreed to in writing, software +rem distributed under the License is distributed on an "AS IS" BASIS, +rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +rem See the License for the specific language governing permissions and +rem limitations under the License. + +wmic process where (commandline like "%%zookeeper%%" and not name="wmic.exe") delete diff --git a/bin/windows/zookeeper-shell.bat b/bin/windows/zookeeper-shell.bat new file mode 100644 index 0000000..f1c86c4 --- /dev/null +++ b/bin/windows/zookeeper-shell.bat @@ -0,0 +1,22 @@ +@echo off +rem Licensed to the Apache Software Foundation (ASF) under one or more +rem contributor license agreements. See the NOTICE file distributed with +rem this work for additional information regarding copyright ownership. +rem The ASF licenses this file to You under the Apache License, Version 2.0 +rem (the "License"); you may not use this file except in compliance with +rem the License. You may obtain a copy of the License at +rem +rem http://www.apache.org/licenses/LICENSE-2.0 +rem +rem Unless required by applicable law or agreed to in writing, software +rem distributed under the License is distributed on an "AS IS" BASIS, +rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +rem See the License for the specific language governing permissions and +rem limitations under the License. + +IF [%1] EQU [] ( + echo USAGE: %0 zookeeper_host:port[/path] [-zk-tls-config-file file] [args...] 
+ EXIT /B 1 +) + +"%~dp0kafka-run-class.bat" org.apache.zookeeper.ZooKeeperMainWithTlsSupportForKafka -server %* diff --git a/bin/zookeeper-security-migration.sh b/bin/zookeeper-security-migration.sh new file mode 100755 index 0000000..722bde7 --- /dev/null +++ b/bin/zookeeper-security-migration.sh @@ -0,0 +1,17 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +exec $(dirname $0)/kafka-run-class.sh kafka.admin.ZkSecurityMigrator "$@" diff --git a/bin/zookeeper-server-start.sh b/bin/zookeeper-server-start.sh new file mode 100755 index 0000000..bd9c114 --- /dev/null +++ b/bin/zookeeper-server-start.sh @@ -0,0 +1,44 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ $# -lt 1 ]; +then + echo "USAGE: $0 [-daemon] zookeeper.properties" + exit 1 +fi +base_dir=$(dirname $0) + +if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then + export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties" +fi + +if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then + export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M" +fi + +EXTRA_ARGS=${EXTRA_ARGS-'-name zookeeper -loggc'} + +COMMAND=$1 +case $COMMAND in + -daemon) + EXTRA_ARGS="-daemon "$EXTRA_ARGS + shift + ;; + *) + ;; +esac + +exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@" diff --git a/bin/zookeeper-server-stop.sh b/bin/zookeeper-server-stop.sh new file mode 100755 index 0000000..11665f3 --- /dev/null +++ b/bin/zookeeper-server-stop.sh @@ -0,0 +1,35 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +SIGNAL=${SIGNAL:-TERM} + +OSNAME=$(uname -s) +if [[ "$OSNAME" == "OS/390" ]]; then + if [ -z $JOBNAME ]; then + JOBNAME="ZKEESTRT" + fi + PIDS=$(ps -A -o pid,jobname,comm | grep -i $JOBNAME | grep java | grep -v grep | awk '{print $1}') +elif [[ "$OSNAME" == "OS400" ]]; then + PIDS=$(ps -Af | grep java | grep -i QuorumPeerMain | grep -v grep | awk '{print $2}') +else + PIDS=$(ps ax | grep java | grep -i QuorumPeerMain | grep -v grep | awk '{print $1}') +fi + +if [ -z "$PIDS" ]; then + echo "No zookeeper server to stop" + exit 1 +else + kill -s $SIGNAL $PIDS +fi diff --git a/bin/zookeeper-shell.sh b/bin/zookeeper-shell.sh new file mode 100755 index 0000000..2f1d0f2 --- /dev/null +++ b/bin/zookeeper-shell.sh @@ -0,0 +1,23 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
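zookeeper-server-stop.sh above defaults its signal with `SIGNAL=${SIGNAL:-TERM}`, while the start scripts default the heap with the older `[ "x$VAR" = "x" ]` string test. Both idioms do the same job when the variable is unset or empty; a self-contained sketch, with values copied from the scripts in this patch:

```shell
#!/bin/sh
# Idiom 1: parameter expansion, as in zookeeper-server-stop.sh.
unset SIGNAL
SIGNAL=${SIGNAL:-TERM}            # stays TERM unless the caller exported SIGNAL

# Idiom 2: the "x"-prefix test, as in zookeeper-server-start.sh; the leading x
# guards against values an old test(1) builtin could misparse as an operator.
unset KAFKA_HEAP_OPTS
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
  KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
fi

echo "$SIGNAL $KAFKA_HEAP_OPTS"   # -> TERM -Xmx512M -Xms512M
```

One difference worth noting: `${VAR:-default}` substitutes when VAR is unset *or* empty, whereas `${VAR-default}` (used for EXTRA_ARGS in zookeeper-server-start.sh) substitutes only when VAR is unset.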
+ +if [ $# -lt 1 ]; +then + echo "USAGE: $0 zookeeper_host:port[/path] [-zk-tls-config-file file] [args...]" + exit 1 +fi + +exec $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.ZooKeeperMainWithTlsSupportForKafka -server "$@" diff --git a/config/connect-console-sink.properties b/config/connect-console-sink.properties new file mode 100644 index 0000000..e240a8f --- /dev/null +++ b/config/connect-console-sink.properties @@ -0,0 +1,19 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +name=local-console-sink +connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector +tasks.max=1 +topics=connect-test \ No newline at end of file diff --git a/config/connect-console-source.properties b/config/connect-console-source.properties new file mode 100644 index 0000000..d0e2069 --- /dev/null +++ b/config/connect-console-source.properties @@ -0,0 +1,19 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +name=local-console-source +connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector +tasks.max=1 +topic=connect-test \ No newline at end of file diff --git a/config/connect-distributed.properties b/config/connect-distributed.properties new file mode 100644 index 0000000..cedad9a --- /dev/null +++ b/config/connect-distributed.properties @@ -0,0 +1,89 @@ +## +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +## + +# This file contains some of the configurations for the Kafka Connect distributed worker. This file is intended +# to be used with the examples, and some settings may differ from those used in a production system, especially +# the `bootstrap.servers` and those specifying replication factors. + +# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. 
+bootstrap.servers=localhost:9092 + +# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs +group.id=connect-cluster + +# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will +# need to configure these based on the format they want their data in when loaded from or stored into Kafka +key.converter=org.apache.kafka.connect.json.JsonConverter +value.converter=org.apache.kafka.connect.json.JsonConverter +# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply +# it to +key.converter.schemas.enable=true +value.converter.schemas.enable=true + +# Topic to use for storing offsets. This topic should have many partitions and be replicated and compacted. +# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create +# the topic before starting Kafka Connect if a specific topic configuration is needed. +# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value. +# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able +# to run this example on a single-broker cluster and so here we instead set the replication factor to 1. +offset.storage.topic=connect-offsets +offset.storage.replication.factor=1 +#offset.storage.partitions=25 + +# Topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated, +# and compacted topic. Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create +# the topic before starting Kafka Connect if a specific topic configuration is needed. +# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value. 
+# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able +# to run this example on a single-broker cluster and so here we instead set the replication factor to 1. +config.storage.topic=connect-configs +config.storage.replication.factor=1 + +# Topic to use for storing statuses. This topic can have multiple partitions and should be replicated and compacted. +# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create +# the topic before starting Kafka Connect if a specific topic configuration is needed. +# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value. +# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able +# to run this example on a single-broker cluster and so here we instead set the replication factor to 1. +status.storage.topic=connect-status +status.storage.replication.factor=1 +#status.storage.partitions=5 + +# Flush much faster than normal, which is useful for testing/debugging +offset.flush.interval.ms=10000 + +# List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. +# Specify hostname as 0.0.0.0 to bind to all interfaces. +# Leave hostname empty to bind to default interface. +# Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084" +#listeners=HTTP://:8083 + +# The Hostname & Port that will be given out to other workers to connect to i.e. URLs that are routable from other servers. +# If not set, it uses the value for "listeners" if configured. +#rest.advertised.host.name= +#rest.advertised.port= +#rest.advertised.listener= + +# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins +# (connectors, converters, transformations). 
The list should consist of top level directories that include +# any combination of: +# a) directories immediately containing jars with plugins and their dependencies +# b) uber-jars with plugins and their dependencies +# c) directories immediately containing the package directory structure of classes of plugins and their dependencies +# Examples: +# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors, +#plugin.path= diff --git a/config/connect-file-sink.properties b/config/connect-file-sink.properties new file mode 100644 index 0000000..594ccc6 --- /dev/null +++ b/config/connect-file-sink.properties @@ -0,0 +1,20 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +name=local-file-sink +connector.class=FileStreamSink +tasks.max=1 +file=test.sink.txt +topics=connect-test \ No newline at end of file diff --git a/config/connect-file-source.properties b/config/connect-file-source.properties new file mode 100644 index 0000000..599cf4c --- /dev/null +++ b/config/connect-file-source.properties @@ -0,0 +1,20 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. 
See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +name=local-file-source +connector.class=FileStreamSource +tasks.max=1 +file=test.txt +topic=connect-test \ No newline at end of file diff --git a/config/connect-log4j.properties b/config/connect-log4j.properties new file mode 100644 index 0000000..2e049a5 --- /dev/null +++ b/config/connect-log4j.properties @@ -0,0 +1,41 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +log4j.rootLogger=INFO, stdout, connectAppender + +# Send the logs to the console. 
+ +# +log4j.appender.stdout=org.apache.log4j.ConsoleAppender +log4j.appender.stdout.layout=org.apache.log4j.PatternLayout + +# Send the logs to a file, rolling the file at midnight local time. For example, the `File` option specifies the +# location of the log files (e.g. ${kafka.logs.dir}/connect.log), and at midnight local time the file is closed +# and copied in the same directory but with a filename that ends in the `DatePattern` option. +# +log4j.appender.connectAppender=org.apache.log4j.DailyRollingFileAppender +log4j.appender.connectAppender.DatePattern='.'yyyy-MM-dd-HH +log4j.appender.connectAppender.File=${kafka.logs.dir}/connect.log +log4j.appender.connectAppender.layout=org.apache.log4j.PatternLayout + +# The `%X{connector.context}` parameter in the layout includes connector-specific and task-specific information +# in the log messages, where appropriate. This makes it easier to identify those log messages that apply to a +# specific connector. +# +connect.log.pattern=[%d] %p %X{connector.context}%m (%c:%L)%n + +log4j.appender.stdout.layout.ConversionPattern=${connect.log.pattern} +log4j.appender.connectAppender.layout.ConversionPattern=${connect.log.pattern} + +log4j.logger.org.reflections=ERROR diff --git a/config/connect-mirror-maker.properties b/config/connect-mirror-maker.properties new file mode 100644 index 0000000..40afda5 --- /dev/null +++ b/config/connect-mirror-maker.properties @@ -0,0 +1,59 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Sample MirrorMaker 2.0 top-level configuration file +# Run with ./bin/connect-mirror-maker.sh connect-mirror-maker.properties + +# specify any number of cluster aliases +clusters = A, B + +# connection information for each cluster +# This is a comma-separated list of host:port pairs for each cluster +# for e.g. "A_host1:9092, A_host2:9092, A_host3:9092" +A.bootstrap.servers = A_host1:9092, A_host2:9092, A_host3:9092 +B.bootstrap.servers = B_host1:9092, B_host2:9092, B_host3:9092 + +# enable and configure individual replication flows +A->B.enabled = true + +# regex which defines which topics get replicated. For eg "foo-.*" +A->B.topics = .* + +B->A.enabled = true +B->A.topics = .* + +# Setting replication factor of newly created remote topics +replication.factor=1 + +############################# Internal Topic Settings ############################# +# The replication factor for mm2 internal topics "heartbeats", "B.checkpoints.internal" and +# "mm2-offset-syncs.B.internal" +# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3. +checkpoints.topic.replication.factor=1 +heartbeats.topic.replication.factor=1 +offset-syncs.topic.replication.factor=1 + +# The replication factor for connect internal topics "mm2-configs.B.internal", "mm2-offsets.B.internal" and +# "mm2-status.B.internal" +# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3. 
+offset.storage.replication.factor=1 +status.storage.replication.factor=1 +config.storage.replication.factor=1 + +# customize as needed +# replication.policy.separator = _ +# sync.topic.acls.enabled = false +# emit.heartbeats.interval.seconds = 5 diff --git a/config/connect-standalone.properties b/config/connect-standalone.properties new file mode 100644 index 0000000..a340a3b --- /dev/null +++ b/config/connect-standalone.properties @@ -0,0 +1,41 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# These are defaults. This file just demonstrates how to override some settings. +bootstrap.servers=localhost:9092 + +# The converters specify the format of data in Kafka and how to translate it into Connect data. 
Every Connect user will +# need to configure these based on the format they want their data in when loaded from or stored into Kafka +key.converter=org.apache.kafka.connect.json.JsonConverter +value.converter=org.apache.kafka.connect.json.JsonConverter +# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply +# it to +key.converter.schemas.enable=true +value.converter.schemas.enable=true + +offset.storage.file.filename=/tmp/connect.offsets +# Flush much faster than normal, which is useful for testing/debugging +offset.flush.interval.ms=10000 + +# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins +# (connectors, converters, transformations). The list should consist of top level directories that include +# any combination of: +# a) directories immediately containing jars with plugins and their dependencies +# b) uber-jars with plugins and their dependencies +# c) directories immediately containing the package directory structure of classes of plugins and their dependencies +# Note: symlinks will be followed to discover dependencies or plugins. +# Examples: +# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors, +#plugin.path= diff --git a/config/consumer.properties b/config/consumer.properties new file mode 100644 index 0000000..01bb12e --- /dev/null +++ b/config/consumer.properties @@ -0,0 +1,26 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# see org.apache.kafka.clients.consumer.ConsumerConfig for more details + +# list of brokers used for bootstrapping knowledge about the rest of the cluster +# format: host1:port1,host2:port2 ... +bootstrap.servers=localhost:9092 + +# consumer group id +group.id=test-consumer-group + +# What to do when there is no initial offset in Kafka or if the current +# offset does not exist any more on the server: latest, earliest, none +#auto.offset.reset= diff --git a/config/kraft/broker.properties b/config/kraft/broker.properties new file mode 100644 index 0000000..2d15997 --- /dev/null +++ b/config/kraft/broker.properties @@ -0,0 +1,129 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# This configuration file is intended for use in KRaft mode, where +# Apache ZooKeeper is not present. 
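consumer.properties above leaves `auto.offset.reset` commented out and names its accepted values (latest, earliest, none). A hedged sketch of a config that overrides it — an illustrative fragment, not a recommendation:

```properties
# Illustrative consumer config; host and group values mirror the sample above.
bootstrap.servers=localhost:9092
group.id=test-consumer-group
# With no committed offset for the group, start from the oldest available record.
# Per the comment above, the other accepted values are `latest` (the client
# default) and `none` (fail instead of picking a position).
auto.offset.reset=earliest
```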
+# + +############################# Server Basics ############################# + +# The role of this server. Setting this puts us in KRaft mode +process.roles=broker + +# The node id associated with this instance's roles +node.id=2 + +# The connect string for the controller quorum +controller.quorum.voters=1@localhost:9093 + +############################# Socket Server Settings ############################# + +# The address the socket server listens on. If not configured, the host name will be equal to the value of +# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092. +# FORMAT: +# listeners = listener_name://host_name:port +# EXAMPLE: +# listeners = PLAINTEXT://your.host.name:9092 +listeners=PLAINTEXT://localhost:9092 + +# Name of listener used for communication between brokers. +inter.broker.listener.name=PLAINTEXT + +# Listener name, hostname and port the broker will advertise to clients. +# If not set, it uses the value for "listeners". +advertised.listeners=PLAINTEXT://localhost:9092 + +# A comma-separated list of the names of the listeners used by the controller. +# This is required if running in KRaft mode. On a node with `process.roles=broker`, only the first listed listener will be used by the broker. +controller.listener.names=CONTROLLER + +# Maps listener names to security protocols, the default is for them to be the same. 
See the config documentation for more details
+listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
+
+# The number of threads that the server uses for receiving requests from the network and sending responses to the network
+num.network.threads=3
+
+# The number of threads that the server uses for processing requests, which may include disk I/O
+num.io.threads=8
+
+# The send buffer (SO_SNDBUF) used by the socket server
+socket.send.buffer.bytes=102400
+
+# The receive buffer (SO_RCVBUF) used by the socket server
+socket.receive.buffer.bytes=102400
+
+# The maximum size of a request that the socket server will accept (protection against OOM)
+socket.request.max.bytes=104857600
+
+
+############################# Log Basics #############################
+
+# A comma separated list of directories under which to store log files
+log.dirs=/tmp/kraft-broker-logs
+
+# The default number of log partitions per topic. More partitions allow greater
+# parallelism for consumption, but this will also result in more files across
+# the brokers.
+num.partitions=1
+
+# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
+# This value is recommended to be increased for installations with data dirs located in a RAID array.
+num.recovery.threads.per.data.dir=1
+
+############################# Internal Topic Settings #############################
+# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
+# For anything other than development testing, a value greater than 1, such as 3, is recommended to ensure availability.
+offsets.topic.replication.factor=1
+transaction.state.log.replication.factor=1
+transaction.state.log.min.isr=1
+
+############################# Log Flush Policy #############################
+
+# Messages are immediately written to the filesystem but by default we only fsync() to sync
+# the OS cache lazily. The following configurations control the flush of data to disk.
+# There are a few important trade-offs here:
+# 1. Durability: Unflushed data may be lost if you are not using replication.
+# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
+# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
+# The settings below allow one to configure the flush policy to flush data after a period of time or
+# every N messages (or both). This can be done globally and overridden on a per-topic basis.
+
+# The number of messages to accept before forcing a flush of data to disk
+#log.flush.interval.messages=10000
+
+# The maximum amount of time a message can sit in a log before we force a flush
+#log.flush.interval.ms=1000
+
+############################# Log Retention Policy #############################
+
+# The following configurations control the disposal of log segments. The policy can
+# be set to delete segments after a period of time, or after a given size has accumulated.
+# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
+# from the end of the log.
+
+# The minimum age of a log file to be eligible for deletion due to age
+log.retention.hours=168
+
+# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
+# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
+#log.retention.bytes=1073741824
+
+# The maximum size of a log segment file.
When this size is reached a new log segment will be created. +log.segment.bytes=1073741824 + +# The interval at which log segments are checked to see if they can be deleted according +# to the retention policies +log.retention.check.interval.ms=300000 diff --git a/config/kraft/controller.properties b/config/kraft/controller.properties new file mode 100644 index 0000000..9d152f7 --- /dev/null +++ b/config/kraft/controller.properties @@ -0,0 +1,122 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# This configuration file is intended for use in KRaft mode, where +# Apache ZooKeeper is not present. +# + +############################# Server Basics ############################# + +# The role of this server. Setting this puts us in KRaft mode +process.roles=controller + +# The node id associated with this instance's roles +node.id=1 + +# The connect string for the controller quorum +controller.quorum.voters=1@localhost:9093 + +############################# Socket Server Settings ############################# + +# The address the socket server listens on. +# Note that only the controller listeners are allowed here when `process.roles=controller`, and this listener should be consistent with `controller.quorum.voters` value. 
+
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
+listeners=CONTROLLER://:9093
+
+# A comma-separated list of the names of the listeners used by the controller.
+# This is required if running in KRaft mode.
+controller.listener.names=CONTROLLER
+
+# Maps listener names to security protocols; by default they are the same. See the config documentation for more details
+#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
+
+# The number of threads that the server uses for receiving requests from the network and sending responses to the network
+num.network.threads=3
+
+# The number of threads that the server uses for processing requests, which may include disk I/O
+num.io.threads=8
+
+# The send buffer (SO_SNDBUF) used by the socket server
+socket.send.buffer.bytes=102400
+
+# The receive buffer (SO_RCVBUF) used by the socket server
+socket.receive.buffer.bytes=102400
+
+# The maximum size of a request that the socket server will accept (protection against OOM)
+socket.request.max.bytes=104857600
+
+
+############################# Log Basics #############################
+
+# A comma separated list of directories under which to store log files
+log.dirs=/tmp/kraft-controller-logs
+
+# The default number of log partitions per topic. More partitions allow greater
+# parallelism for consumption, but this will also result in more files across
+# the brokers.
+num.partitions=1
+
+# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
+# This value is recommended to be increased for installations with data dirs located in a RAID array.
+
num.recovery.threads.per.data.dir=1
+
+############################# Internal Topic Settings #############################
+# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
+# For anything other than development testing, a value greater than 1, such as 3, is recommended to ensure availability.
+offsets.topic.replication.factor=1
+transaction.state.log.replication.factor=1
+transaction.state.log.min.isr=1
+
+############################# Log Flush Policy #############################
+
+# Messages are immediately written to the filesystem but by default we only fsync() to sync
+# the OS cache lazily. The following configurations control the flush of data to disk.
+# There are a few important trade-offs here:
+# 1. Durability: Unflushed data may be lost if you are not using replication.
+# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
+# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
+# The settings below allow one to configure the flush policy to flush data after a period of time or
+# every N messages (or both). This can be done globally and overridden on a per-topic basis.
+
+# The number of messages to accept before forcing a flush of data to disk
+#log.flush.interval.messages=10000
+
+# The maximum amount of time a message can sit in a log before we force a flush
+#log.flush.interval.ms=1000
+
+############################# Log Retention Policy #############################
+
+# The following configurations control the disposal of log segments. The policy can
+# be set to delete segments after a period of time, or after a given size has accumulated.
+# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
+# from the end of the log.
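The retention comments above describe two criteria combined with a logical OR: a segment becomes deletable when it is old enough *or* when the log exceeds the size budget, and deletion always starts at the oldest end. As a rough sketch of that logic (an illustrative model with hypothetical names, not Kafka's actual implementation):

```python
def segments_to_delete(segments, retention_ms, retention_bytes, now_ms):
    """Illustrative model of the retention rules described above.

    `segments` is a list of (last_modified_ms, size_bytes) tuples, oldest
    first. A segment is eligible when EITHER criterion is met; pruning
    always proceeds from the oldest end of the log.
    """
    # Time-based criterion (cf. log.retention.hours): segments whose age
    # exceeds the retention window are eligible.
    doomed = [s for s in segments if now_ms - s[0] > retention_ms]
    remaining = [s for s in segments if s not in doomed]

    # Size-based criterion (cf. log.retention.bytes; a negative value
    # disables it): prune the oldest remaining segments until the total
    # size drops to the limit or below.
    if retention_bytes >= 0:
        total = sum(size for _, size in remaining)
        for seg in remaining:
            if total <= retention_bytes:
                break
            doomed.append(seg)
            total -= seg[1]
    return doomed
```

With three 500-byte segments, a 6000 ms window, and an 800-byte budget at `now_ms=10000`, the oldest segment falls to the age rule and the next-oldest to the size rule, while the newest survives both.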
+ +# The minimum age of a log file to be eligible for deletion due to age +log.retention.hours=168 + +# A size-based retention policy for logs. Segments are pruned from the log unless the remaining +# segments drop below log.retention.bytes. Functions independently of log.retention.hours. +#log.retention.bytes=1073741824 + +# The maximum size of a log segment file. When this size is reached a new log segment will be created. +log.segment.bytes=1073741824 + +# The interval at which log segments are checked to see if they can be deleted according +# to the retention policies +log.retention.check.interval.ms=300000 diff --git a/config/kraft/server.properties b/config/kraft/server.properties new file mode 100644 index 0000000..6461c98 --- /dev/null +++ b/config/kraft/server.properties @@ -0,0 +1,132 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# This configuration file is intended for use in KRaft mode, where +# Apache ZooKeeper is not present. +# + +############################# Server Basics ############################# + +# The role of this server. 
Setting this puts us in KRaft mode
+process.roles=broker,controller
+
+# The node id associated with this instance's roles
+node.id=1
+
+# The connect string for the controller quorum
+controller.quorum.voters=1@localhost:9093
+
+############################# Socket Server Settings #############################
+
+# The address the socket server listens on.
+# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
+# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
+# with PLAINTEXT listener name, and port 9092.
+# FORMAT:
+# listeners = listener_name://host_name:port
+# EXAMPLE:
+# listeners = PLAINTEXT://your.host.name:9092
+listeners=PLAINTEXT://:9092,CONTROLLER://:9093
+
+# Name of listener used for communication between brokers.
+inter.broker.listener.name=PLAINTEXT
+
+# Listener name, hostname and port the broker will advertise to clients.
+# If not set, it uses the value for "listeners".
+advertised.listeners=PLAINTEXT://localhost:9092
+
+# A comma-separated list of the names of the listeners used by the controller.
+# If no explicit mapping is set in `listener.security.protocol.map`, the PLAINTEXT protocol will be used by default.
+# This is required if running in KRaft mode.
+controller.listener.names=CONTROLLER
+
+# Maps listener names to security protocols; by default they are the same.
See the config documentation for more details
+listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
+
+# The number of threads that the server uses for receiving requests from the network and sending responses to the network
+num.network.threads=3
+
+# The number of threads that the server uses for processing requests, which may include disk I/O
+num.io.threads=8
+
+# The send buffer (SO_SNDBUF) used by the socket server
+socket.send.buffer.bytes=102400
+
+# The receive buffer (SO_RCVBUF) used by the socket server
+socket.receive.buffer.bytes=102400
+
+# The maximum size of a request that the socket server will accept (protection against OOM)
+socket.request.max.bytes=104857600
+
+
+############################# Log Basics #############################
+
+# A comma separated list of directories under which to store log files
+log.dirs=/tmp/kraft-combined-logs
+
+# The default number of log partitions per topic. More partitions allow greater
+# parallelism for consumption, but this will also result in more files across
+# the brokers.
+num.partitions=1
+
+# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
+# This value is recommended to be increased for installations with data dirs located in a RAID array.
+num.recovery.threads.per.data.dir=1
+
+############################# Internal Topic Settings #############################
+# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
+# For anything other than development testing, a value greater than 1, such as 3, is recommended to ensure availability.
+
offsets.topic.replication.factor=1
+transaction.state.log.replication.factor=1
+transaction.state.log.min.isr=1
+
+############################# Log Flush Policy #############################
+
+# Messages are immediately written to the filesystem but by default we only fsync() to sync
+# the OS cache lazily. The following configurations control the flush of data to disk.
+# There are a few important trade-offs here:
+# 1. Durability: Unflushed data may be lost if you are not using replication.
+# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
+# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
+# The settings below allow one to configure the flush policy to flush data after a period of time or
+# every N messages (or both). This can be done globally and overridden on a per-topic basis.
+
+# The number of messages to accept before forcing a flush of data to disk
+#log.flush.interval.messages=10000
+
+# The maximum amount of time a message can sit in a log before we force a flush
+#log.flush.interval.ms=1000
+
+############################# Log Retention Policy #############################
+
+# The following configurations control the disposal of log segments. The policy can
+# be set to delete segments after a period of time, or after a given size has accumulated.
+# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
+# from the end of the log.
+
+# The minimum age of a log file to be eligible for deletion due to age
+log.retention.hours=168
+
+# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
+# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
+#log.retention.bytes=1073741824
+
+# The maximum size of a log segment file.
When this size is reached a new log segment will be created. +log.segment.bytes=1073741824 + +# The interval at which log segments are checked to see if they can be deleted according +# to the retention policies +log.retention.check.interval.ms=300000 diff --git a/config/log4j.properties b/config/log4j.properties new file mode 100644 index 0000000..4dbdd83 --- /dev/null +++ b/config/log4j.properties @@ -0,0 +1,96 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
# Unspecified loggers and loggers with additivity=true output to server.log and stdout
+# Note that INFO only applies to unspecified loggers; otherwise the log level of the child logger is used
+log4j.rootLogger=INFO, stdout, kafkaAppender
+
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
+log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
+log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
+log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
+log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
+log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
+log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
+log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
+log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
+log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
+log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
+log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
+log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
+
+log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
+log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log +log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout +log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n + +log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender +log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH +log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log +log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout +log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n + +# Change the line below to adjust ZK client logging +log4j.logger.org.apache.zookeeper=INFO + +# Change the two lines below to adjust the general broker logging level (output to server.log and stdout) +log4j.logger.kafka=INFO +log4j.logger.org.apache.kafka=INFO + +# Change to DEBUG or TRACE to enable request logging +log4j.logger.kafka.request.logger=WARN, requestAppender +log4j.additivity.kafka.request.logger=false + +# Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output +# related to the handling of requests +#log4j.logger.kafka.network.Processor=TRACE, requestAppender +#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender +#log4j.additivity.kafka.server.KafkaApis=false +log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender +log4j.additivity.kafka.network.RequestChannel$=false + +# Change the line below to adjust KRaft mode controller logging +log4j.logger.org.apache.kafka.controller=INFO, controllerAppender +log4j.additivity.org.apache.kafka.controller=false + +# Change the line below to adjust ZK mode controller logging +log4j.logger.kafka.controller=TRACE, controllerAppender +log4j.additivity.kafka.controller=false + +log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender +log4j.additivity.kafka.log.LogCleaner=false + +log4j.logger.state.change.logger=INFO, stateChangeAppender 
+log4j.additivity.state.change.logger=false + +# Access denials are logged at INFO level, change to DEBUG to also log allowed accesses +log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender +log4j.additivity.kafka.authorizer.logger=false + diff --git a/config/producer.properties b/config/producer.properties new file mode 100644 index 0000000..3a999e7 --- /dev/null +++ b/config/producer.properties @@ -0,0 +1,46 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# see org.apache.kafka.clients.producer.ProducerConfig for more details + +############################# Producer Basics ############################# + +# list of brokers used for bootstrapping knowledge about the rest of the cluster +# format: host1:port1,host2:port2 ... +bootstrap.servers=localhost:9092 + +# specify the compression codec for all data generated: none, gzip, snappy, lz4, zstd +compression.type=none + +# name of the partitioner class for partitioning records; +# The default uses "sticky" partitioning logic which spreads the load evenly between partitions, but improves throughput by attempting to fill the batches sent to each partition. 
+#partitioner.class= + +# the maximum amount of time the client will wait for the response of a request +#request.timeout.ms= + +# how long `KafkaProducer.send` and `KafkaProducer.partitionsFor` will block for +#max.block.ms= + +# the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together +#linger.ms= + +# the maximum size of a request in bytes +#max.request.size= + +# the default batch size in bytes when batching multiple records sent to a partition +#batch.size= + +# the total bytes of memory the producer can use to buffer records waiting to be sent to the server +#buffer.memory= diff --git a/config/server.properties b/config/server.properties new file mode 100644 index 0000000..21ba1c7 --- /dev/null +++ b/config/server.properties @@ -0,0 +1,138 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# +# This configuration file is intended for use in ZK-based mode, where Apache ZooKeeper is required. +# See kafka.server.KafkaConfig for additional details and defaults +# + +############################# Server Basics ############################# + +# The id of the broker. This must be set to a unique integer for each broker. 
+broker.id=0 + +############################# Socket Server Settings ############################# + +# The address the socket server listens on. If not configured, the host name will be equal to the value of +# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092. +# FORMAT: +# listeners = listener_name://host_name:port +# EXAMPLE: +# listeners = PLAINTEXT://your.host.name:9092 +#listeners=PLAINTEXT://:9092 + +# Listener name, hostname and port the broker will advertise to clients. +# If not set, it uses the value for "listeners". +#advertised.listeners=PLAINTEXT://your.host.name:9092 + +# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details +#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL + +# The number of threads that the server uses for receiving requests from the network and sending responses to the network +num.network.threads=3 + +# The number of threads that the server uses for processing requests, which may include disk I/O +num.io.threads=8 + +# The send buffer (SO_SNDBUF) used by the socket server +socket.send.buffer.bytes=102400 + +# The receive buffer (SO_RCVBUF) used by the socket server +socket.receive.buffer.bytes=102400 + +# The maximum size of a request that the socket server will accept (protection against OOM) +socket.request.max.bytes=104857600 + + +############################# Log Basics ############################# + +# A comma separated list of directories under which to store log files +log.dirs=/tmp/kafka-logs + +# The default number of log partitions per topic. More partitions allow greater +# parallelism for consumption, but this will also result in more files across +# the brokers. +num.partitions=1 + +# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. 
+
# This value is recommended to be increased for installations with data dirs located in a RAID array.
+num.recovery.threads.per.data.dir=1
+
+############################# Internal Topic Settings #############################
+# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
+# For anything other than development testing, a value greater than 1, such as 3, is recommended to ensure availability.
+offsets.topic.replication.factor=1
+transaction.state.log.replication.factor=1
+transaction.state.log.min.isr=1
+
+############################# Log Flush Policy #############################
+
+# Messages are immediately written to the filesystem but by default we only fsync() to sync
+# the OS cache lazily. The following configurations control the flush of data to disk.
+# There are a few important trade-offs here:
+# 1. Durability: Unflushed data may be lost if you are not using replication.
+# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
+# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
+# The settings below allow one to configure the flush policy to flush data after a period of time or
+# every N messages (or both). This can be done globally and overridden on a per-topic basis.
+
+# The number of messages to accept before forcing a flush of data to disk
+#log.flush.interval.messages=10000
+
+# The maximum amount of time a message can sit in a log before we force a flush
+#log.flush.interval.ms=1000
+
+############################# Log Retention Policy #############################
+
+# The following configurations control the disposal of log segments. The policy can
+# be set to delete segments after a period of time, or after a given size has accumulated.
+# A segment will be deleted whenever *either* of these criteria is met.
Deletion always happens
+# from the end of the log.
+
+# The minimum age of a log file to be eligible for deletion due to age
+log.retention.hours=168
+
+# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
+# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
+#log.retention.bytes=1073741824
+
+# The maximum size of a log segment file. When this size is reached a new log segment will be created.
+#log.segment.bytes=1073741824
+
+# The interval at which log segments are checked to see if they can be deleted according
+# to the retention policies
+log.retention.check.interval.ms=300000
+
+############################# Zookeeper #############################
+
+# Zookeeper connection string (see zookeeper docs for details).
+# This is a comma-separated list of host:port pairs, each corresponding to a
+# zk server, e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
+# You can also append an optional chroot string to the urls to specify the
+# root directory for all kafka znodes.
+zookeeper.connect=localhost:2181
+
+# Timeout in ms for connecting to zookeeper
+zookeeper.connection.timeout.ms=18000
+
+
+############################# Group Coordinator Settings #############################
+
+# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
+# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
+# The default value for this is 3 seconds.
+# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
+# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
+group.initial.rebalance.delay.ms=0 diff --git a/config/tools-log4j.properties b/config/tools-log4j.properties new file mode 100644 index 0000000..b669a4e --- /dev/null +++ b/config/tools-log4j.properties @@ -0,0 +1,24 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +log4j.rootLogger=WARN, stderr + +log4j.appender.stderr=org.apache.log4j.ConsoleAppender +log4j.appender.stderr.layout=org.apache.log4j.PatternLayout +log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n +log4j.appender.stderr.Target=System.err + +# for connect-plugin-path +log4j.logger.org.reflections=ERROR diff --git a/config/trogdor.conf b/config/trogdor.conf new file mode 100644 index 0000000..320cbe7 --- /dev/null +++ b/config/trogdor.conf @@ -0,0 +1,25 @@ +{ + "_comment": [ + "Licensed to the Apache Software Foundation (ASF) under one or more", + "contributor license agreements. See the NOTICE file distributed with", + "this work for additional information regarding copyright ownership.", + "The ASF licenses this file to You under the Apache License, Version 2.0", + "(the \"License\"); you may not use this file except in compliance with", + "the License. 
You may obtain a copy of the License at", + "", + "http://www.apache.org/licenses/LICENSE-2.0", + "", + "Unless required by applicable law or agreed to in writing, software", + "distributed under the License is distributed on an \"AS IS\" BASIS,", + "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.", + "See the License for the specific language governing permissions and", + "limitations under the License." + ], + "platform": "org.apache.kafka.trogdor.basic.BasicPlatform", "nodes": { + "node0": { + "hostname": "localhost", + "trogdor.agent.port": 8888, + "trogdor.coordinator.port": 8889 + } + } +} diff --git a/config/zookeeper.properties b/config/zookeeper.properties new file mode 100644 index 0000000..90f4332 --- /dev/null +++ b/config/zookeeper.properties @@ -0,0 +1,24 @@ +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# the directory where the snapshot is stored. +dataDir=/tmp/zookeeper +# the port at which the clients will connect +clientPort=2181 +# disable the per-ip limit on the number of connections since this is a non-production config +maxClientCnxns=0 +# Disable the adminserver by default to avoid port conflicts. 
+# Set the port to something non-conflicting if choosing to enable this +admin.enableServer=false +# admin.serverPort=8080 diff --git a/dotnet/gregorsamsa_consumer/Program.cs b/dotnet/gregorsamsa_consumer/Program.cs new file mode 100644 index 0000000..b96d92a --- /dev/null +++ b/dotnet/gregorsamsa_consumer/Program.cs @@ -0,0 +1,89 @@ +using System; +using System.Collections.Generic; +using System.Threading; +using Confluent.Kafka; +class Program +{ + + public static List<IConsumer<Ignore, string>> buildConsumers(string[] arStr) { + List<IConsumer<Ignore, string>> arCon = new(); + + var config = new ConsumerConfig + { + BootstrapServers = "localhost:9092", + GroupId = "test-group", + AutoOffsetReset = AutoOffsetReset.Earliest + }; + foreach (var str in arStr) { + var con = new ConsumerBuilder<Ignore, string>(config) + .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}")) + .SetPartitionsAssignedHandler((c, partitions) => + { + Console.WriteLine($"Assigned partitions: {string.Join(", ", partitions)}"); + // return (IEnumerable)partitions; + }) + .SetPartitionsRevokedHandler((c, partitions) => + { + Console.WriteLine($"Revoked partitions: {string.Join(", ", partitions)}"); + }).Build(); + + con.Subscribe(str); + + arCon.Add(con); + } + return arCon; + +} + + + + static void Main(string[] args) + { + var config = new ConsumerConfig + { + BootstrapServers = "localhost:9092", + GroupId = "test-group", + AutoOffsetReset = AutoOffsetReset.Earliest + }; + var topics = new string[] { "test-topic" }; + var cancellationTokenSource = new CancellationTokenSource(); + var consumers = buildConsumers(topics); + Console.WriteLine($"Starting {consumers.Count} consumers..."); + try + { + foreach (var consumer in consumers) + { + var thread = new Thread(() => + { + try + { + while (!cancellationTokenSource.Token.IsCancellationRequested) + { + var message = consumer.Consume(cancellationTokenSource.Token); + Console.WriteLine($"Received message: {message.Message.Value}, Partition: {message.Partition}, Offset: {message.Offset}"); + } + } + catch (OperationCanceledException) + { + 
Console.WriteLine("Consumer thread canceled."); + } + finally + { + // Each worker closes its own consumer; foreground threads keep the + // process alive until this runs, so no second Close() is needed in Main. + consumer.Close(); + } + }); + thread.Start(); + } + Console.WriteLine("Press any key to stop..."); + Console.ReadKey(); + cancellationTokenSource.Cancel(); + } + catch (Exception ex) + { + Console.WriteLine($"Error: {ex.Message}"); + } + } +} diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/Confluent.Kafka.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/Confluent.Kafka.dll new file mode 100755 index 0000000..71aa504 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/Confluent.Kafka.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa new file mode 100755 index 0000000..b10a4d0 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.deps.json b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.deps.json new file mode 100644 index 0000000..fc10bec --- /dev/null +++ b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.deps.json @@ -0,0 +1,189 @@ +{ + "runtimeTarget": { + "name": ".NETCoreApp,Version=v7.0", + "signature": "" + }, + "compilationOptions": {}, + "targets": { + ".NETCoreApp,Version=v7.0": { + "gregorsamsa/1.0.0": { + "dependencies": { + "Confluent.Kafka": "2.3.0" + }, + "runtime": { + "gregorsamsa.dll": {} + } + }, + "Confluent.Kafka/2.3.0": { + "dependencies": { + "System.Memory": "4.5.0", + "librdkafka.redist": "2.3.0" + }, + "runtime": { + "lib/net6.0/Confluent.Kafka.dll": { + "assemblyVersion": "2.3.0.0", + "fileVersion": "2.3.0.0" + } + } + }, + "librdkafka.redist/2.3.0": { + "runtimeTargets": { + "runtimes/linux-arm64/native/librdkafka.so": { + "rid": "linux-arm64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/linux-x64/native/alpine-librdkafka.so": { + 
"rid": "linux-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/linux-x64/native/centos6-librdkafka.so": { + "rid": "linux-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/linux-x64/native/centos7-librdkafka.so": { + "rid": "linux-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/linux-x64/native/librdkafka.so": { + "rid": "linux-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/osx-arm64/native/librdkafka.dylib": { + "rid": "osx-arm64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/osx-x64/native/librdkafka.dylib": { + "rid": "osx-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/libcrypto-3-x64.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/libcurl.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/librdkafka.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/librdkafkacpp.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/libssl-3-x64.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/msvcp140.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/vcruntime140.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/zlib1.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/zstd.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/libcrypto-3.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/libcurl.dll": { + 
"rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/librdkafka.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/librdkafkacpp.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/libssl-3.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/msvcp140.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/vcruntime140.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/zlib1.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/zstd.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + } + } + }, + "System.Memory/4.5.0": {} + } + }, + "libraries": { + "gregorsamsa/1.0.0": { + "type": "project", + "serviceable": false, + "sha512": "" + }, + "Confluent.Kafka/2.3.0": { + "type": "package", + "serviceable": true, + "sha512": "sha512-JSBXN/X7bBNS92bgZp82v1oT58kw9ndpKSGC5VgELeM/HgXUTssFkG3gEPEGd3cOIa5MMJSLe6+gYwzzjdAJPw==", + "path": "confluent.kafka/2.3.0", + "hashPath": "confluent.kafka.2.3.0.nupkg.sha512" + }, + "librdkafka.redist/2.3.0": { + "type": "package", + "serviceable": true, + "sha512": "sha512-pH5zFZ0S56Wl6UfRkmDJN2AjHlPdVxlTskncFnL27LLGQuuY2dAU8YrZBkduBOws4tURS2TaTp1aPsY3qeJ0bw==", + "path": "librdkafka.redist/2.3.0", + "hashPath": "librdkafka.redist.2.3.0.nupkg.sha512" + }, + "System.Memory/4.5.0": { + "type": "package", + "serviceable": true, + "sha512": "sha512-m0psCSpUxTGfvwyO0i03ajXVhgBqyXlibXz0Mo1dtKGjaHrXFLnuQ8rNBTmWRqbfRjr4eC6Wah4X5FfuFDu5og==", + "path": "system.memory/4.5.0", + "hashPath": "system.memory.4.5.0.nupkg.sha512" + } + } +} \ No newline at end of file diff --git 
a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.dll new file mode 100644 index 0000000..b917df3 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.pdb b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.pdb new file mode 100644 index 0000000..b4373c9 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.pdb differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.runtimeconfig.json b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.runtimeconfig.json new file mode 100644 index 0000000..184be8b --- /dev/null +++ b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.runtimeconfig.json @@ -0,0 +1,9 @@ +{ + "runtimeOptions": { + "tfm": "net7.0", + "framework": { + "name": "Microsoft.NETCore.App", + "version": "7.0.0" + } + } +} \ No newline at end of file diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so new file mode 100755 index 0000000..7df736c Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so new file mode 100755 index 0000000..1dc0302 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so new file mode 100755 index 0000000..e5c9351 
Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so new file mode 100755 index 0000000..bc7a4ae Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so new file mode 100755 index 0000000..e18f309 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib new file mode 100755 index 0000000..4e77fe7 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib new file mode 100755 index 0000000..b6c5376 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll new file mode 100755 index 0000000..23db0d3 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll differ diff --git 
a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll new file mode 100755 index 0000000..9598671 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll new file mode 100755 index 0000000..cd3a71f Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll new file mode 100755 index 0000000..d7252f3 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll new file mode 100755 index 0000000..7429d8f Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll new file mode 100755 index 0000000..aace6c2 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll new file mode 100755 index 0000000..dfc38b3 Binary files /dev/null and 
b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll new file mode 100755 index 0000000..d652f60 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll new file mode 100755 index 0000000..9c53b3f Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll new file mode 100755 index 0000000..64d2066 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll new file mode 100755 index 0000000..af84086 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll new file mode 100755 index 0000000..a49ac65 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll new file mode 
100755 index 0000000..24858be Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll new file mode 100755 index 0000000..5a542fb Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll new file mode 100755 index 0000000..5baaab2 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll new file mode 100755 index 0000000..c10229c Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll new file mode 100755 index 0000000..76d9f11 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll differ diff --git a/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll new file mode 100755 index 0000000..e409ca6 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll differ diff --git a/dotnet/gregorsamsa_consumer/gregorsamsa.csproj b/dotnet/gregorsamsa_consumer/gregorsamsa.csproj new file mode 100644 index 
0000000..0ab1639 --- /dev/null +++ b/dotnet/gregorsamsa_consumer/gregorsamsa.csproj @@ -0,0 +1,14 @@ +<Project Sdk="Microsoft.NET.Sdk"> + + <PropertyGroup> + <OutputType>Exe</OutputType> + <TargetFramework>net7.0</TargetFramework> + <ImplicitUsings>enable</ImplicitUsings> + <Nullable>enable</Nullable> + </PropertyGroup> + + <ItemGroup> + <PackageReference Include="Confluent.Kafka" Version="2.3.0" /> + </ItemGroup> + +</Project> diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/.NETCoreApp,Version=v7.0.AssemblyAttributes.cs b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/.NETCoreApp,Version=v7.0.AssemblyAttributes.cs new file mode 100644 index 0000000..d69481d --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/.NETCoreApp,Version=v7.0.AssemblyAttributes.cs @@ -0,0 +1,4 @@ +// <autogenerated /> +using System; +using System.Reflection; +[assembly: global::System.Runtime.Versioning.TargetFrameworkAttribute(".NETCoreApp,Version=v7.0", FrameworkDisplayName = ".NET 7.0")] diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/apphost b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/apphost new file mode 100755 index 0000000..b10a4d0 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/apphost differ diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.AssemblyInfo.cs b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.AssemblyInfo.cs new file mode 100644 index 0000000..1500276 --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.AssemblyInfo.cs @@ -0,0 +1,22 @@ +//------------------------------------------------------------------------------ +// +// This code was generated by a tool. +// +// Changes to this file may cause incorrect behavior and will be lost if +// the code is regenerated. 
+// +//------------------------------------------------------------------------------ + +using System; +using System.Reflection; + +[assembly: System.Reflection.AssemblyCompanyAttribute("gregorsamsa")] +[assembly: System.Reflection.AssemblyConfigurationAttribute("Debug")] +[assembly: System.Reflection.AssemblyFileVersionAttribute("1.0.0.0")] +[assembly: System.Reflection.AssemblyInformationalVersionAttribute("1.0.0")] +[assembly: System.Reflection.AssemblyProductAttribute("gregorsamsa")] +[assembly: System.Reflection.AssemblyTitleAttribute("gregorsamsa")] +[assembly: System.Reflection.AssemblyVersionAttribute("1.0.0.0")] + +// Generated by the MSBuild WriteCodeFragment class. + diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.AssemblyInfoInputs.cache b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.AssemblyInfoInputs.cache new file mode 100644 index 0000000..233f480 --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.AssemblyInfoInputs.cache @@ -0,0 +1 @@ +ef3e3820d41d6cd9814440f24b4af8a1135ee417 diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.GeneratedMSBuildEditorConfig.editorconfig b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.GeneratedMSBuildEditorConfig.editorconfig new file mode 100644 index 0000000..44f0c90 --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.GeneratedMSBuildEditorConfig.editorconfig @@ -0,0 +1,11 @@ +is_global = true +build_property.TargetFramework = net7.0 +build_property.TargetPlatformMinVersion = +build_property.UsingMicrosoftNETSdkWeb = +build_property.ProjectTypeGuids = +build_property.InvariantGlobalization = +build_property.PlatformNeutralAssembly = +build_property.EnforceExtendedAnalyzerRules = +build_property._SupportedPlatformList = Linux,macOS,Windows +build_property.RootNamespace = gregorsamsa +build_property.ProjectDir = /scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/ diff --git 
a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.GlobalUsings.g.cs b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.GlobalUsings.g.cs new file mode 100644 index 0000000..8578f3d --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.GlobalUsings.g.cs @@ -0,0 +1,8 @@ +// +global using global::System; +global using global::System.Collections.Generic; +global using global::System.IO; +global using global::System.Linq; +global using global::System.Net.Http; +global using global::System.Threading; +global using global::System.Threading.Tasks; diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.assets.cache b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.assets.cache new file mode 100644 index 0000000..ae399d1 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.assets.cache differ diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.AssemblyReference.cache b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.AssemblyReference.cache new file mode 100644 index 0000000..c9e52a1 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.AssemblyReference.cache differ diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.CopyComplete b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.CopyComplete new file mode 100644 index 0000000..e69de29 diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.CoreCompileInputs.cache b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.CoreCompileInputs.cache new file mode 100644 index 0000000..130baa4 --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.CoreCompileInputs.cache @@ -0,0 +1 @@ +ae803ef1c48286dbe8e8ba58e1f13618e8c4a416 diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.FileListAbsolute.txt 
b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.FileListAbsolute.txt new file mode 100644 index 0000000..0b62fb0 --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.FileListAbsolute.txt @@ -0,0 +1,84 @@ +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/gregorsamsa.csproj.AssemblyReference.cache +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/gregorsamsa.GeneratedMSBuildEditorConfig.editorconfig +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/gregorsamsa.AssemblyInfoInputs.cache +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/gregorsamsa.AssemblyInfo.cs +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/gregorsamsa.csproj.CoreCompileInputs.cache +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/gregorsamsa +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/gregorsamsa.deps.json +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/gregorsamsa.runtimeconfig.json +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/gregorsamsa.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/gregorsamsa.pdb +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/Confluent.Kafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib 
+/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll 
+/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/gregorsamsa.csproj.CopyComplete +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/gregorsamsa.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/refint/gregorsamsa.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/gregorsamsa.pdb +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/gregorsamsa.genruntimeconfig.cache +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa/obj/Debug/net7.0/ref/gregorsamsa.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.deps.json +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.runtimeconfig.json +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/gregorsamsa.pdb +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/Confluent.Kafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll 
+/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.AssemblyReference.cache 
+/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.GeneratedMSBuildEditorConfig.editorconfig +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.AssemblyInfoInputs.cache +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.AssemblyInfo.cs +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.CoreCompileInputs.cache +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.csproj.CopyComplete +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/refint/gregorsamsa.dll +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.pdb +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.genruntimeconfig.cache +/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/ref/gregorsamsa.dll diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.dll b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.dll new file mode 100644 index 0000000..b917df3 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.dll differ diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.genruntimeconfig.cache b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.genruntimeconfig.cache new file mode 100644 index 0000000..5bad35b --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.genruntimeconfig.cache @@ -0,0 +1 @@ +c09921fd9e866869172c0a191156448888bf8e42 diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.pdb b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.pdb new file mode 100644 index 0000000..b4373c9 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/gregorsamsa.pdb differ diff --git 
a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/ref/gregorsamsa.dll b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/ref/gregorsamsa.dll new file mode 100644 index 0000000..a7516d6 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/ref/gregorsamsa.dll differ diff --git a/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/refint/gregorsamsa.dll b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/refint/gregorsamsa.dll new file mode 100644 index 0000000..a7516d6 Binary files /dev/null and b/dotnet/gregorsamsa_consumer/obj/Debug/net7.0/refint/gregorsamsa.dll differ diff --git a/dotnet/gregorsamsa_consumer/obj/gregorsamsa.csproj.nuget.dgspec.json b/dotnet/gregorsamsa_consumer/obj/gregorsamsa.csproj.nuget.dgspec.json new file mode 100644 index 0000000..264c5bd --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/gregorsamsa.csproj.nuget.dgspec.json @@ -0,0 +1,67 @@ +{ + "format": 1, + "restore": { + "/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/gregorsamsa.csproj": {} + }, + "projects": { + "/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/gregorsamsa.csproj": { + "version": "1.0.0", + "restore": { + "projectUniqueName": "/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/gregorsamsa.csproj", + "projectName": "gregorsamsa", + "projectPath": "/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/gregorsamsa.csproj", + "packagesPath": "/home/memartel/.nuget/packages/", + "outputPath": "/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/", + "projectStyle": "PackageReference", + "configFilePaths": [ + "/home/memartel/.nuget/NuGet/NuGet.Config" + ], + "originalTargetFrameworks": [ + "net7.0" + ], + "sources": { + "https://api.nuget.org/v3/index.json": {} + }, + "frameworks": { + "net7.0": { + "targetAlias": "net7.0", + "projectReferences": {} + } + }, + "warningProperties": { + "warnAsError": [ + "NU1605" + ] + } + }, + "frameworks": { + "net7.0": { + "targetAlias": "net7.0", + "dependencies": { + "Confluent.Kafka": { + "target": "Package", + 
"version": "[2.3.0, )" + } + }, + "imports": [ + "net461", + "net462", + "net47", + "net471", + "net472", + "net48", + "net481" + ], + "assetTargetFallback": true, + "warn": true, + "frameworkReferences": { + "Microsoft.NETCore.App": { + "privateAssets": "all" + } + }, + "runtimeIdentifierGraphPath": "/opt/dotnet-sdk-bin-7.0/sdk/7.0.401/RuntimeIdentifierGraph.json" + } + } + } + } +} \ No newline at end of file diff --git a/dotnet/gregorsamsa_consumer/obj/gregorsamsa.csproj.nuget.g.props b/dotnet/gregorsamsa_consumer/obj/gregorsamsa.csproj.nuget.g.props new file mode 100644 index 0000000..5940c00 --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/gregorsamsa.csproj.nuget.g.props @@ -0,0 +1,15 @@ +<?xml version="1.0" encoding="utf-8" standalone="no"?> +<Project ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> + <PropertyGroup Condition=" '$(ExcludeRestorePackageImports)' != 'true' "> + <RestoreSuccess Condition=" '$(RestoreSuccess)' == '' ">True</RestoreSuccess> + <RestoreTool Condition=" '$(RestoreTool)' == '' ">NuGet</RestoreTool> + <ProjectAssetsFile Condition=" '$(ProjectAssetsFile)' == '' ">$(MSBuildThisFileDirectory)project.assets.json</ProjectAssetsFile> + <NuGetPackageRoot Condition=" '$(NuGetPackageRoot)' == '' ">/home/memartel/.nuget/packages/</NuGetPackageRoot> + <NuGetPackageFolders Condition=" '$(NuGetPackageFolders)' == '' ">/home/memartel/.nuget/packages/</NuGetPackageFolders> + <NuGetProjectStyle Condition=" '$(NuGetProjectStyle)' == '' ">PackageReference</NuGetProjectStyle> + <NuGetToolVersion Condition=" '$(NuGetToolVersion)' == '' ">6.7.0</NuGetToolVersion> + </PropertyGroup> + <ItemGroup Condition=" '$(ExcludeRestorePackageImports)' != 'true' "> + <SourceRoot Include="/home/memartel/.nuget/packages/" /> + </ItemGroup> +</Project> \ No newline at end of file diff --git a/dotnet/gregorsamsa_consumer/obj/gregorsamsa.csproj.nuget.g.targets b/dotnet/gregorsamsa_consumer/obj/gregorsamsa.csproj.nuget.g.targets new file mode 100644 index 0000000..3dc06ef --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/gregorsamsa.csproj.nuget.g.targets @@ -0,0 +1,2 @@ +<?xml version="1.0" encoding="utf-8" standalone="no"?> +<Project ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" /> \ No newline at end of file diff --git a/dotnet/gregorsamsa_consumer/obj/project.assets.json b/dotnet/gregorsamsa_consumer/obj/project.assets.json new file mode 100644 index 0000000..9b6ed43 --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/project.assets.json @@ -0,0 +1,316 @@ +{ + "version": 3, + "targets": { + "net7.0": { + "Confluent.Kafka/2.3.0": { + "type": "package", + "dependencies": { + "System.Memory": "4.5.0", + "librdkafka.redist": "2.3.0" + }, + "compile": { + "lib/net6.0/Confluent.Kafka.dll": { + "related": ".xml" + } + }, + "runtime": { + "lib/net6.0/Confluent.Kafka.dll": { + "related": ".xml" + } + } + }, + "librdkafka.redist/2.3.0": { + "type": "package", + "build": { + "build/_._": {} + }, + "runtimeTargets": { + "runtimes/linux-arm64/native/librdkafka.so": { 
+ "assetType": "native", + "rid": "linux-arm64" + }, + "runtimes/linux-x64/native/alpine-librdkafka.so": { + "assetType": "native", + "rid": "linux-x64" + }, + "runtimes/linux-x64/native/centos6-librdkafka.so": { + "assetType": "native", + "rid": "linux-x64" + }, + "runtimes/linux-x64/native/centos7-librdkafka.so": { + "assetType": "native", + "rid": "linux-x64" + }, + "runtimes/linux-x64/native/librdkafka.so": { + "assetType": "native", + "rid": "linux-x64" + }, + "runtimes/osx-arm64/native/librdkafka.dylib": { + "assetType": "native", + "rid": "osx-arm64" + }, + "runtimes/osx-x64/native/librdkafka.dylib": { + "assetType": "native", + "rid": "osx-x64" + }, + "runtimes/win-x64/native/libcrypto-3-x64.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/libcurl.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/librdkafka.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/librdkafkacpp.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/libssl-3-x64.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/msvcp140.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/vcruntime140.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/zlib1.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/zstd.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x86/native/libcrypto-3.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/libcurl.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/librdkafka.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/librdkafkacpp.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/libssl-3.dll": { + "assetType": "native", + "rid": "win-x86" + }, + 
"runtimes/win-x86/native/msvcp140.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/vcruntime140.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/zlib1.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/zstd.dll": { + "assetType": "native", + "rid": "win-x86" + } + } + }, + "System.Memory/4.5.0": { + "type": "package", + "compile": { + "ref/netcoreapp2.1/_._": {} + }, + "runtime": { + "lib/netcoreapp2.1/_._": {} + } + } + } + }, + "libraries": { + "Confluent.Kafka/2.3.0": { + "sha512": "JSBXN/X7bBNS92bgZp82v1oT58kw9ndpKSGC5VgELeM/HgXUTssFkG3gEPEGd3cOIa5MMJSLe6+gYwzzjdAJPw==", + "type": "package", + "path": "confluent.kafka/2.3.0", + "files": [ + ".nupkg.metadata", + ".signature.p7s", + "confluent.kafka.2.3.0.nupkg.sha512", + "confluent.kafka.nuspec", + "lib/net462/Confluent.Kafka.dll", + "lib/net462/Confluent.Kafka.xml", + "lib/net6.0/Confluent.Kafka.dll", + "lib/net6.0/Confluent.Kafka.xml", + "lib/netstandard1.3/Confluent.Kafka.dll", + "lib/netstandard1.3/Confluent.Kafka.xml", + "lib/netstandard2.0/Confluent.Kafka.dll", + "lib/netstandard2.0/Confluent.Kafka.xml" + ] + }, + "librdkafka.redist/2.3.0": { + "sha512": "pH5zFZ0S56Wl6UfRkmDJN2AjHlPdVxlTskncFnL27LLGQuuY2dAU8YrZBkduBOws4tURS2TaTp1aPsY3qeJ0bw==", + "type": "package", + "path": "librdkafka.redist/2.3.0", + "files": [ + ".nupkg.metadata", + ".signature.p7s", + "CONFIGURATION.md", + "LICENSES.txt", + "README.md", + "build/librdkafka.redist.props", + "build/native/include/librdkafka/rdkafka.h", + "build/native/include/librdkafka/rdkafka_mock.h", + "build/native/include/librdkafka/rdkafkacpp.h", + "build/native/lib/win/x64/win-x64-Release/v142/librdkafka.lib", + "build/native/lib/win/x64/win-x64-Release/v142/librdkafkacpp.lib", + "build/native/lib/win/x86/win-x86-Release/v142/librdkafka.lib", + "build/native/lib/win/x86/win-x86-Release/v142/librdkafkacpp.lib", + "build/native/librdkafka.redist.targets", + 
"librdkafka.redist.2.3.0.nupkg.sha512", + "librdkafka.redist.nuspec", + "runtimes/linux-arm64/native/librdkafka.so", + "runtimes/linux-x64/native/alpine-librdkafka.so", + "runtimes/linux-x64/native/centos6-librdkafka.so", + "runtimes/linux-x64/native/centos7-librdkafka.so", + "runtimes/linux-x64/native/librdkafka.so", + "runtimes/osx-arm64/native/librdkafka.dylib", + "runtimes/osx-x64/native/librdkafka.dylib", + "runtimes/win-x64/native/libcrypto-3-x64.dll", + "runtimes/win-x64/native/libcurl.dll", + "runtimes/win-x64/native/librdkafka.dll", + "runtimes/win-x64/native/librdkafkacpp.dll", + "runtimes/win-x64/native/libssl-3-x64.dll", + "runtimes/win-x64/native/msvcp140.dll", + "runtimes/win-x64/native/vcruntime140.dll", + "runtimes/win-x64/native/zlib1.dll", + "runtimes/win-x64/native/zstd.dll", + "runtimes/win-x86/native/libcrypto-3.dll", + "runtimes/win-x86/native/libcurl.dll", + "runtimes/win-x86/native/librdkafka.dll", + "runtimes/win-x86/native/librdkafkacpp.dll", + "runtimes/win-x86/native/libssl-3.dll", + "runtimes/win-x86/native/msvcp140.dll", + "runtimes/win-x86/native/vcruntime140.dll", + "runtimes/win-x86/native/zlib1.dll", + "runtimes/win-x86/native/zstd.dll" + ] + }, + "System.Memory/4.5.0": { + "sha512": "m0psCSpUxTGfvwyO0i03ajXVhgBqyXlibXz0Mo1dtKGjaHrXFLnuQ8rNBTmWRqbfRjr4eC6Wah4X5FfuFDu5og==", + "type": "package", + "path": "system.memory/4.5.0", + "files": [ + ".nupkg.metadata", + ".signature.p7s", + "LICENSE.TXT", + "THIRD-PARTY-NOTICES.TXT", + "lib/MonoAndroid10/_._", + "lib/MonoTouch10/_._", + "lib/netcoreapp2.1/_._", + "lib/netstandard1.1/System.Memory.dll", + "lib/netstandard1.1/System.Memory.xml", + "lib/netstandard2.0/System.Memory.dll", + "lib/netstandard2.0/System.Memory.xml", + "lib/uap10.0.16300/_._", + "lib/xamarinios10/_._", + "lib/xamarinmac20/_._", + "lib/xamarintvos10/_._", + "lib/xamarinwatchos10/_._", + "ref/MonoAndroid10/_._", + "ref/MonoTouch10/_._", + "ref/netcoreapp2.1/_._", + "ref/netstandard1.1/System.Memory.dll", + 
"ref/netstandard1.1/System.Memory.xml", + "ref/netstandard2.0/System.Memory.dll", + "ref/netstandard2.0/System.Memory.xml", + "ref/uap10.0.16300/_._", + "ref/xamarinios10/_._", + "ref/xamarinmac20/_._", + "ref/xamarintvos10/_._", + "ref/xamarinwatchos10/_._", + "system.memory.4.5.0.nupkg.sha512", + "system.memory.nuspec", + "useSharedDesignerContext.txt", + "version.txt" + ] + } + }, + "projectFileDependencyGroups": { + "net7.0": [ + "Confluent.Kafka >= 2.3.0" + ] + }, + "packageFolders": { + "/home/memartel/.nuget/packages/": {} + }, + "project": { + "version": "1.0.0", + "restore": { + "projectUniqueName": "/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/gregorsamsa.csproj", + "projectName": "gregorsamsa", + "projectPath": "/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/gregorsamsa.csproj", + "packagesPath": "/home/memartel/.nuget/packages/", + "outputPath": "/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/obj/", + "projectStyle": "PackageReference", + "configFilePaths": [ + "/home/memartel/.nuget/NuGet/NuGet.Config" + ], + "originalTargetFrameworks": [ + "net7.0" + ], + "sources": { + "https://api.nuget.org/v3/index.json": {} + }, + "frameworks": { + "net7.0": { + "targetAlias": "net7.0", + "projectReferences": {} + } + }, + "warningProperties": { + "warnAsError": [ + "NU1605" + ] + } + }, + "frameworks": { + "net7.0": { + "targetAlias": "net7.0", + "dependencies": { + "Confluent.Kafka": { + "target": "Package", + "version": "[2.3.0, )" + } + }, + "imports": [ + "net461", + "net462", + "net47", + "net471", + "net472", + "net48", + "net481" + ], + "assetTargetFallback": true, + "warn": true, + "frameworkReferences": { + "Microsoft.NETCore.App": { + "privateAssets": "all" + } + }, + "runtimeIdentifierGraphPath": "/opt/dotnet-sdk-bin-7.0/sdk/7.0.401/RuntimeIdentifierGraph.json" + } + } + } +} \ No newline at end of file diff --git a/dotnet/gregorsamsa_consumer/obj/project.nuget.cache b/dotnet/gregorsamsa_consumer/obj/project.nuget.cache new file 
mode 100644 index 0000000..03b08dd --- /dev/null +++ b/dotnet/gregorsamsa_consumer/obj/project.nuget.cache @@ -0,0 +1,12 @@ +{ + "version": 2, + "dgSpecHash": "pyLdCwzQ+68DYzRVXElbdtUJMBKh05rDrrcLfnczref/qLGRmiwGb53x6cU2mEncV9TIRRruJ3rC+SIlQlp11w==", + "success": true, + "projectFilePath": "/scratch/kafka_2.13-3.6.0/dotnet/gregorsamsa_consumer/gregorsamsa.csproj", + "expectedPackageFiles": [ + "/home/memartel/.nuget/packages/confluent.kafka/2.3.0/confluent.kafka.2.3.0.nupkg.sha512", + "/home/memartel/.nuget/packages/librdkafka.redist/2.3.0/librdkafka.redist.2.3.0.nupkg.sha512", + "/home/memartel/.nuget/packages/system.memory/4.5.0/system.memory.4.5.0.nupkg.sha512" + ], + "logs": [] +} \ No newline at end of file diff --git a/dotnet/josefk_producer/Program.cs b/dotnet/josefk_producer/Program.cs new file mode 100644 index 0000000..952f329 --- /dev/null +++ b/dotnet/josefk_producer/Program.cs @@ -0,0 +1,21 @@ +using System; +using Confluent.Kafka; +class Program +{ + static void Main(string[] args) + { + var config = new ProducerConfig + { + BootstrapServers = "localhost:9092" + }; + using var producer = new ProducerBuilder<Null, string>(config).Build(); + + var topic = "test-topic"; + var message = new Message<Null, string> { Value = "Hello, Kafka!" 
}; + producer.Produce(topic, message, deliveryReport => { + Console.WriteLine(deliveryReport.Message.Value); + }); + producer.Flush(new TimeSpan(0,0,30)); + + } +} diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/Confluent.Kafka.dll b/dotnet/josefk_producer/bin/Debug/net7.0/Confluent.Kafka.dll new file mode 100755 index 0000000..71aa504 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/Confluent.Kafka.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/josefk b/dotnet/josefk_producer/bin/Debug/net7.0/josefk new file mode 100755 index 0000000..195002c Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/josefk differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/josefk.deps.json b/dotnet/josefk_producer/bin/Debug/net7.0/josefk.deps.json new file mode 100644 index 0000000..e401f48 --- /dev/null +++ b/dotnet/josefk_producer/bin/Debug/net7.0/josefk.deps.json @@ -0,0 +1,189 @@ +{ + "runtimeTarget": { + "name": ".NETCoreApp,Version=v7.0", + "signature": "" + }, + "compilationOptions": {}, + "targets": { + ".NETCoreApp,Version=v7.0": { + "josefk/1.0.0": { + "dependencies": { + "Confluent.Kafka": "2.3.0" + }, + "runtime": { + "josefk.dll": {} + } + }, + "Confluent.Kafka/2.3.0": { + "dependencies": { + "System.Memory": "4.5.0", + "librdkafka.redist": "2.3.0" + }, + "runtime": { + "lib/net6.0/Confluent.Kafka.dll": { + "assemblyVersion": "2.3.0.0", + "fileVersion": "2.3.0.0" + } + } + }, + "librdkafka.redist/2.3.0": { + "runtimeTargets": { + "runtimes/linux-arm64/native/librdkafka.so": { + "rid": "linux-arm64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/linux-x64/native/alpine-librdkafka.so": { + "rid": "linux-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/linux-x64/native/centos6-librdkafka.so": { + "rid": "linux-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/linux-x64/native/centos7-librdkafka.so": { + "rid": "linux-x64", + "assetType": 
"native", + "fileVersion": "0.0.0.0" + }, + "runtimes/linux-x64/native/librdkafka.so": { + "rid": "linux-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/osx-arm64/native/librdkafka.dylib": { + "rid": "osx-arm64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/osx-x64/native/librdkafka.dylib": { + "rid": "osx-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/libcrypto-3-x64.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/libcurl.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/librdkafka.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/librdkafkacpp.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/libssl-3-x64.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/msvcp140.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/vcruntime140.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/zlib1.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x64/native/zstd.dll": { + "rid": "win-x64", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/libcrypto-3.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/libcurl.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/librdkafka.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/librdkafkacpp.dll": { + "rid": "win-x86", + "assetType": "native", + 
"fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/libssl-3.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/msvcp140.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/vcruntime140.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/zlib1.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + }, + "runtimes/win-x86/native/zstd.dll": { + "rid": "win-x86", + "assetType": "native", + "fileVersion": "0.0.0.0" + } + } + }, + "System.Memory/4.5.0": {} + } + }, + "libraries": { + "josefk/1.0.0": { + "type": "project", + "serviceable": false, + "sha512": "" + }, + "Confluent.Kafka/2.3.0": { + "type": "package", + "serviceable": true, + "sha512": "sha512-JSBXN/X7bBNS92bgZp82v1oT58kw9ndpKSGC5VgELeM/HgXUTssFkG3gEPEGd3cOIa5MMJSLe6+gYwzzjdAJPw==", + "path": "confluent.kafka/2.3.0", + "hashPath": "confluent.kafka.2.3.0.nupkg.sha512" + }, + "librdkafka.redist/2.3.0": { + "type": "package", + "serviceable": true, + "sha512": "sha512-pH5zFZ0S56Wl6UfRkmDJN2AjHlPdVxlTskncFnL27LLGQuuY2dAU8YrZBkduBOws4tURS2TaTp1aPsY3qeJ0bw==", + "path": "librdkafka.redist/2.3.0", + "hashPath": "librdkafka.redist.2.3.0.nupkg.sha512" + }, + "System.Memory/4.5.0": { + "type": "package", + "serviceable": true, + "sha512": "sha512-m0psCSpUxTGfvwyO0i03ajXVhgBqyXlibXz0Mo1dtKGjaHrXFLnuQ8rNBTmWRqbfRjr4eC6Wah4X5FfuFDu5og==", + "path": "system.memory/4.5.0", + "hashPath": "system.memory.4.5.0.nupkg.sha512" + } + } +} \ No newline at end of file diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/josefk.dll b/dotnet/josefk_producer/bin/Debug/net7.0/josefk.dll new file mode 100644 index 0000000..26872f9 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/josefk.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/josefk.pdb 
b/dotnet/josefk_producer/bin/Debug/net7.0/josefk.pdb new file mode 100644 index 0000000..871e07f Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/josefk.pdb differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/josefk.runtimeconfig.json b/dotnet/josefk_producer/bin/Debug/net7.0/josefk.runtimeconfig.json new file mode 100644 index 0000000..184be8b --- /dev/null +++ b/dotnet/josefk_producer/bin/Debug/net7.0/josefk.runtimeconfig.json @@ -0,0 +1,9 @@ +{ + "runtimeOptions": { + "tfm": "net7.0", + "framework": { + "name": "Microsoft.NETCore.App", + "version": "7.0.0" + } + } +} \ No newline at end of file diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so new file mode 100755 index 0000000..7df736c Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so new file mode 100755 index 0000000..1dc0302 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so new file mode 100755 index 0000000..e5c9351 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so new file mode 100755 index 0000000..bc7a4ae Binary files /dev/null and 
b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so new file mode 100755 index 0000000..e18f309 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib new file mode 100755 index 0000000..4e77fe7 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib new file mode 100755 index 0000000..b6c5376 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll new file mode 100755 index 0000000..23db0d3 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll new file mode 100755 index 0000000..9598671 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll new file mode 100755 index 
0000000..cd3a71f Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll new file mode 100755 index 0000000..d7252f3 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll new file mode 100755 index 0000000..7429d8f Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll new file mode 100755 index 0000000..aace6c2 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll new file mode 100755 index 0000000..dfc38b3 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll new file mode 100755 index 0000000..d652f60 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll new file mode 100755 index 0000000..9c53b3f 
Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll new file mode 100755 index 0000000..64d2066 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll new file mode 100755 index 0000000..af84086 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll new file mode 100755 index 0000000..a49ac65 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll new file mode 100755 index 0000000..24858be Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll new file mode 100755 index 0000000..5a542fb Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll new file mode 100755 index 0000000..5baaab2 Binary files 
/dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll new file mode 100755 index 0000000..c10229c Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll new file mode 100755 index 0000000..76d9f11 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll differ diff --git a/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll new file mode 100755 index 0000000..e409ca6 Binary files /dev/null and b/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll differ diff --git a/dotnet/josefk_producer/josefk.csproj b/dotnet/josefk_producer/josefk.csproj new file mode 100644 index 0000000..0ab1639 --- /dev/null +++ b/dotnet/josefk_producer/josefk.csproj @@ -0,0 +1,14 @@ +<Project Sdk="Microsoft.NET.Sdk"> + + <PropertyGroup> + <OutputType>Exe</OutputType> + <TargetFramework>net7.0</TargetFramework> + <ImplicitUsings>enable</ImplicitUsings> + <Nullable>enable</Nullable> + </PropertyGroup> + + <ItemGroup> + <PackageReference Include="Confluent.Kafka" Version="2.3.0" /> + </ItemGroup> + +</Project> diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/.NETCoreApp,Version=v7.0.AssemblyAttributes.cs b/dotnet/josefk_producer/obj/Debug/net7.0/.NETCoreApp,Version=v7.0.AssemblyAttributes.cs new file mode 100644 index 0000000..d69481d --- /dev/null +++ b/dotnet/josefk_producer/obj/Debug/net7.0/.NETCoreApp,Version=v7.0.AssemblyAttributes.cs @@ -0,0 +1,4 @@ +// <autogenerated /> +using System; +using System.Reflection; +[assembly: global::System.Runtime.Versioning.TargetFrameworkAttribute(".NETCoreApp,Version=v7.0", FrameworkDisplayName = ".NET 7.0")] diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/apphost b/dotnet/josefk_producer/obj/Debug/net7.0/apphost new file mode 100755 
index 0000000..195002c
Binary files /dev/null and b/dotnet/josefk_producer/obj/Debug/net7.0/apphost differ
diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.AssemblyInfo.cs b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.AssemblyInfo.cs
new file mode 100644
index 0000000..7b925a2
--- /dev/null
+++ b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.AssemblyInfo.cs
@@ -0,0 +1,22 @@
+//------------------------------------------------------------------------------
+// <auto-generated>
+//     This code was generated by a tool.
+//
+//     Changes to this file may cause incorrect behavior and will be lost if
+//     the code is regenerated.
+// </auto-generated>
+//------------------------------------------------------------------------------
+
+using System;
+using System.Reflection;
+
+[assembly: System.Reflection.AssemblyCompanyAttribute("josefk")]
+[assembly: System.Reflection.AssemblyConfigurationAttribute("Debug")]
+[assembly: System.Reflection.AssemblyFileVersionAttribute("1.0.0.0")]
+[assembly: System.Reflection.AssemblyInformationalVersionAttribute("1.0.0")]
+[assembly: System.Reflection.AssemblyProductAttribute("josefk")]
+[assembly: System.Reflection.AssemblyTitleAttribute("josefk")]
+[assembly: System.Reflection.AssemblyVersionAttribute("1.0.0.0")]
+
+// Generated by the MSBuild WriteCodeFragment class.
+
diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.AssemblyInfoInputs.cache b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.AssemblyInfoInputs.cache
new file mode 100644
index 0000000..bda2a45
--- /dev/null
+++ b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.AssemblyInfoInputs.cache
@@ -0,0 +1 @@
+2fea50b85150297f445405e09991a376890eb7c0
diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.GeneratedMSBuildEditorConfig.editorconfig b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.GeneratedMSBuildEditorConfig.editorconfig
new file mode 100644
index 0000000..1feaad4
--- /dev/null
+++ b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.GeneratedMSBuildEditorConfig.editorconfig
@@ -0,0 +1,11 @@
+is_global = true
+build_property.TargetFramework = net7.0
+build_property.TargetPlatformMinVersion = 
+build_property.UsingMicrosoftNETSdkWeb = 
+build_property.ProjectTypeGuids = 
+build_property.InvariantGlobalization = 
+build_property.PlatformNeutralAssembly = 
+build_property.EnforceExtendedAnalyzerRules = 
+build_property._SupportedPlatformList = Linux,macOS,Windows
+build_property.RootNamespace = josefk
+build_property.ProjectDir = /scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/
diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.GlobalUsings.g.cs b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.GlobalUsings.g.cs
new file mode 100644
index 0000000..8578f3d
--- /dev/null
+++ b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.GlobalUsings.g.cs
@@ -0,0 +1,8 @@
+// <auto-generated/>
+global using global::System;
+global using global::System.Collections.Generic;
+global using global::System.IO;
+global using global::System.Linq;
+global using global::System.Net.Http;
+global using global::System.Threading;
+global using global::System.Threading.Tasks;
diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.assets.cache b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.assets.cache
new file mode 100644
index 0000000..61c19ae
Binary files /dev/null and
b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.assets.cache differ diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.AssemblyReference.cache b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.AssemblyReference.cache new file mode 100644 index 0000000..c9e52a1 Binary files /dev/null and b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.AssemblyReference.cache differ diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.CopyComplete b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.CopyComplete new file mode 100644 index 0000000..e69de29 diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.CoreCompileInputs.cache b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.CoreCompileInputs.cache new file mode 100644 index 0000000..b7dcd22 --- /dev/null +++ b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.CoreCompileInputs.cache @@ -0,0 +1 @@ +05e0be4c2d12ae0f17c5d6f12962bce0cc2e9e72 diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.FileListAbsolute.txt b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.FileListAbsolute.txt new file mode 100644 index 0000000..65dd3fd --- /dev/null +++ b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.FileListAbsolute.txt @@ -0,0 +1,84 @@ +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/josefk.GeneratedMSBuildEditorConfig.editorconfig +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/josefk.AssemblyInfoInputs.cache +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/josefk.AssemblyInfo.cs +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/josefk.csproj.CoreCompileInputs.cache +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/josefk +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/josefk.deps.json +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/josefk.runtimeconfig.json +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/josefk.dll 
+/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/josefk.pdb +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/Confluent.Kafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll 
+/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/josefk.csproj.AssemblyReference.cache +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/josefk.csproj.CopyComplete +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/josefk.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/refint/josefk.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/josefk.pdb +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/josefk.genruntimeconfig.cache +/scratch/kafka_2.13-3.6.0/dotnet/josefk/obj/Debug/net7.0/ref/josefk.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/josefk +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/josefk.deps.json +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/josefk.runtimeconfig.json +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/josefk.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/josefk.pdb +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/Confluent.Kafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-arm64/native/librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/alpine-librdkafka.so 
+/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/centos6-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/centos7-librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/linux-x64/native/librdkafka.so +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/osx-arm64/native/librdkafka.dylib +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/osx-x64/native/librdkafka.dylib +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libcrypto-3-x64.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libcurl.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/librdkafkacpp.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/libssl-3-x64.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/msvcp140.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/vcruntime140.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/zlib1.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x64/native/zstd.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libcrypto-3.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libcurl.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafka.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/librdkafkacpp.dll 
+/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/libssl-3.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/msvcp140.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/vcruntime140.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/zlib1.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/bin/Debug/net7.0/runtimes/win-x86/native/zstd.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.AssemblyReference.cache +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/josefk.GeneratedMSBuildEditorConfig.editorconfig +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/josefk.AssemblyInfoInputs.cache +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/josefk.AssemblyInfo.cs +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.CoreCompileInputs.cache +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/josefk.csproj.CopyComplete +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/josefk.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/refint/josefk.dll +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/josefk.pdb +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/josefk.genruntimeconfig.cache +/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/Debug/net7.0/ref/josefk.dll diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.dll b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.dll new file mode 100644 index 0000000..26872f9 Binary files /dev/null and b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.dll differ diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.genruntimeconfig.cache b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.genruntimeconfig.cache new file mode 100644 index 0000000..692abb4 --- 
/dev/null +++ b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.genruntimeconfig.cache @@ -0,0 +1 @@ +0f98059d61801c6f6e9dce3a34d39d499ea9eb61 diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/josefk.pdb b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.pdb new file mode 100644 index 0000000..871e07f Binary files /dev/null and b/dotnet/josefk_producer/obj/Debug/net7.0/josefk.pdb differ diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/ref/josefk.dll b/dotnet/josefk_producer/obj/Debug/net7.0/ref/josefk.dll new file mode 100644 index 0000000..74d9638 Binary files /dev/null and b/dotnet/josefk_producer/obj/Debug/net7.0/ref/josefk.dll differ diff --git a/dotnet/josefk_producer/obj/Debug/net7.0/refint/josefk.dll b/dotnet/josefk_producer/obj/Debug/net7.0/refint/josefk.dll new file mode 100644 index 0000000..74d9638 Binary files /dev/null and b/dotnet/josefk_producer/obj/Debug/net7.0/refint/josefk.dll differ diff --git a/dotnet/josefk_producer/obj/josefk.csproj.nuget.dgspec.json b/dotnet/josefk_producer/obj/josefk.csproj.nuget.dgspec.json new file mode 100644 index 0000000..994e01b --- /dev/null +++ b/dotnet/josefk_producer/obj/josefk.csproj.nuget.dgspec.json @@ -0,0 +1,67 @@ +{ + "format": 1, + "restore": { + "/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/josefk.csproj": {} + }, + "projects": { + "/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/josefk.csproj": { + "version": "1.0.0", + "restore": { + "projectUniqueName": "/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/josefk.csproj", + "projectName": "josefk", + "projectPath": "/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/josefk.csproj", + "packagesPath": "/home/memartel/.nuget/packages/", + "outputPath": "/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/", + "projectStyle": "PackageReference", + "configFilePaths": [ + "/home/memartel/.nuget/NuGet/NuGet.Config" + ], + "originalTargetFrameworks": [ + "net7.0" + ], + "sources": { + "https://api.nuget.org/v3/index.json": {} + }, + "frameworks": { + 
"net7.0": { + "targetAlias": "net7.0", + "projectReferences": {} + } + }, + "warningProperties": { + "warnAsError": [ + "NU1605" + ] + } + }, + "frameworks": { + "net7.0": { + "targetAlias": "net7.0", + "dependencies": { + "Confluent.Kafka": { + "target": "Package", + "version": "[2.3.0, )" + } + }, + "imports": [ + "net461", + "net462", + "net47", + "net471", + "net472", + "net48", + "net481" + ], + "assetTargetFallback": true, + "warn": true, + "frameworkReferences": { + "Microsoft.NETCore.App": { + "privateAssets": "all" + } + }, + "runtimeIdentifierGraphPath": "/opt/dotnet-sdk-bin-7.0/sdk/7.0.401/RuntimeIdentifierGraph.json" + } + } + } + } +} \ No newline at end of file diff --git a/dotnet/josefk_producer/obj/josefk.csproj.nuget.g.props b/dotnet/josefk_producer/obj/josefk.csproj.nuget.g.props new file mode 100644 index 0000000..5940c00 --- /dev/null +++ b/dotnet/josefk_producer/obj/josefk.csproj.nuget.g.props @@ -0,0 +1,15 @@ + + + + True + NuGet + $(MSBuildThisFileDirectory)project.assets.json + /home/memartel/.nuget/packages/ + /home/memartel/.nuget/packages/ + PackageReference + 6.7.0 + + + + + \ No newline at end of file diff --git a/dotnet/josefk_producer/obj/josefk.csproj.nuget.g.targets b/dotnet/josefk_producer/obj/josefk.csproj.nuget.g.targets new file mode 100644 index 0000000..3dc06ef --- /dev/null +++ b/dotnet/josefk_producer/obj/josefk.csproj.nuget.g.targets @@ -0,0 +1,2 @@ + + \ No newline at end of file diff --git a/dotnet/josefk_producer/obj/project.assets.json b/dotnet/josefk_producer/obj/project.assets.json new file mode 100644 index 0000000..1399b1a --- /dev/null +++ b/dotnet/josefk_producer/obj/project.assets.json @@ -0,0 +1,316 @@ +{ + "version": 3, + "targets": { + "net7.0": { + "Confluent.Kafka/2.3.0": { + "type": "package", + "dependencies": { + "System.Memory": "4.5.0", + "librdkafka.redist": "2.3.0" + }, + "compile": { + "lib/net6.0/Confluent.Kafka.dll": { + "related": ".xml" + } + }, + "runtime": { + "lib/net6.0/Confluent.Kafka.dll": 
{ + "related": ".xml" + } + } + }, + "librdkafka.redist/2.3.0": { + "type": "package", + "build": { + "build/_._": {} + }, + "runtimeTargets": { + "runtimes/linux-arm64/native/librdkafka.so": { + "assetType": "native", + "rid": "linux-arm64" + }, + "runtimes/linux-x64/native/alpine-librdkafka.so": { + "assetType": "native", + "rid": "linux-x64" + }, + "runtimes/linux-x64/native/centos6-librdkafka.so": { + "assetType": "native", + "rid": "linux-x64" + }, + "runtimes/linux-x64/native/centos7-librdkafka.so": { + "assetType": "native", + "rid": "linux-x64" + }, + "runtimes/linux-x64/native/librdkafka.so": { + "assetType": "native", + "rid": "linux-x64" + }, + "runtimes/osx-arm64/native/librdkafka.dylib": { + "assetType": "native", + "rid": "osx-arm64" + }, + "runtimes/osx-x64/native/librdkafka.dylib": { + "assetType": "native", + "rid": "osx-x64" + }, + "runtimes/win-x64/native/libcrypto-3-x64.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/libcurl.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/librdkafka.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/librdkafkacpp.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/libssl-3-x64.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/msvcp140.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/vcruntime140.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/zlib1.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x64/native/zstd.dll": { + "assetType": "native", + "rid": "win-x64" + }, + "runtimes/win-x86/native/libcrypto-3.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/libcurl.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/librdkafka.dll": { + "assetType": "native", + "rid": "win-x86" + }, + 
"runtimes/win-x86/native/librdkafkacpp.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/libssl-3.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/msvcp140.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/vcruntime140.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/zlib1.dll": { + "assetType": "native", + "rid": "win-x86" + }, + "runtimes/win-x86/native/zstd.dll": { + "assetType": "native", + "rid": "win-x86" + } + } + }, + "System.Memory/4.5.0": { + "type": "package", + "compile": { + "ref/netcoreapp2.1/_._": {} + }, + "runtime": { + "lib/netcoreapp2.1/_._": {} + } + } + } + }, + "libraries": { + "Confluent.Kafka/2.3.0": { + "sha512": "JSBXN/X7bBNS92bgZp82v1oT58kw9ndpKSGC5VgELeM/HgXUTssFkG3gEPEGd3cOIa5MMJSLe6+gYwzzjdAJPw==", + "type": "package", + "path": "confluent.kafka/2.3.0", + "files": [ + ".nupkg.metadata", + ".signature.p7s", + "confluent.kafka.2.3.0.nupkg.sha512", + "confluent.kafka.nuspec", + "lib/net462/Confluent.Kafka.dll", + "lib/net462/Confluent.Kafka.xml", + "lib/net6.0/Confluent.Kafka.dll", + "lib/net6.0/Confluent.Kafka.xml", + "lib/netstandard1.3/Confluent.Kafka.dll", + "lib/netstandard1.3/Confluent.Kafka.xml", + "lib/netstandard2.0/Confluent.Kafka.dll", + "lib/netstandard2.0/Confluent.Kafka.xml" + ] + }, + "librdkafka.redist/2.3.0": { + "sha512": "pH5zFZ0S56Wl6UfRkmDJN2AjHlPdVxlTskncFnL27LLGQuuY2dAU8YrZBkduBOws4tURS2TaTp1aPsY3qeJ0bw==", + "type": "package", + "path": "librdkafka.redist/2.3.0", + "files": [ + ".nupkg.metadata", + ".signature.p7s", + "CONFIGURATION.md", + "LICENSES.txt", + "README.md", + "build/librdkafka.redist.props", + "build/native/include/librdkafka/rdkafka.h", + "build/native/include/librdkafka/rdkafka_mock.h", + "build/native/include/librdkafka/rdkafkacpp.h", + "build/native/lib/win/x64/win-x64-Release/v142/librdkafka.lib", + 
"build/native/lib/win/x64/win-x64-Release/v142/librdkafkacpp.lib", + "build/native/lib/win/x86/win-x86-Release/v142/librdkafka.lib", + "build/native/lib/win/x86/win-x86-Release/v142/librdkafkacpp.lib", + "build/native/librdkafka.redist.targets", + "librdkafka.redist.2.3.0.nupkg.sha512", + "librdkafka.redist.nuspec", + "runtimes/linux-arm64/native/librdkafka.so", + "runtimes/linux-x64/native/alpine-librdkafka.so", + "runtimes/linux-x64/native/centos6-librdkafka.so", + "runtimes/linux-x64/native/centos7-librdkafka.so", + "runtimes/linux-x64/native/librdkafka.so", + "runtimes/osx-arm64/native/librdkafka.dylib", + "runtimes/osx-x64/native/librdkafka.dylib", + "runtimes/win-x64/native/libcrypto-3-x64.dll", + "runtimes/win-x64/native/libcurl.dll", + "runtimes/win-x64/native/librdkafka.dll", + "runtimes/win-x64/native/librdkafkacpp.dll", + "runtimes/win-x64/native/libssl-3-x64.dll", + "runtimes/win-x64/native/msvcp140.dll", + "runtimes/win-x64/native/vcruntime140.dll", + "runtimes/win-x64/native/zlib1.dll", + "runtimes/win-x64/native/zstd.dll", + "runtimes/win-x86/native/libcrypto-3.dll", + "runtimes/win-x86/native/libcurl.dll", + "runtimes/win-x86/native/librdkafka.dll", + "runtimes/win-x86/native/librdkafkacpp.dll", + "runtimes/win-x86/native/libssl-3.dll", + "runtimes/win-x86/native/msvcp140.dll", + "runtimes/win-x86/native/vcruntime140.dll", + "runtimes/win-x86/native/zlib1.dll", + "runtimes/win-x86/native/zstd.dll" + ] + }, + "System.Memory/4.5.0": { + "sha512": "m0psCSpUxTGfvwyO0i03ajXVhgBqyXlibXz0Mo1dtKGjaHrXFLnuQ8rNBTmWRqbfRjr4eC6Wah4X5FfuFDu5og==", + "type": "package", + "path": "system.memory/4.5.0", + "files": [ + ".nupkg.metadata", + ".signature.p7s", + "LICENSE.TXT", + "THIRD-PARTY-NOTICES.TXT", + "lib/MonoAndroid10/_._", + "lib/MonoTouch10/_._", + "lib/netcoreapp2.1/_._", + "lib/netstandard1.1/System.Memory.dll", + "lib/netstandard1.1/System.Memory.xml", + "lib/netstandard2.0/System.Memory.dll", + "lib/netstandard2.0/System.Memory.xml", + 
"lib/uap10.0.16300/_._", + "lib/xamarinios10/_._", + "lib/xamarinmac20/_._", + "lib/xamarintvos10/_._", + "lib/xamarinwatchos10/_._", + "ref/MonoAndroid10/_._", + "ref/MonoTouch10/_._", + "ref/netcoreapp2.1/_._", + "ref/netstandard1.1/System.Memory.dll", + "ref/netstandard1.1/System.Memory.xml", + "ref/netstandard2.0/System.Memory.dll", + "ref/netstandard2.0/System.Memory.xml", + "ref/uap10.0.16300/_._", + "ref/xamarinios10/_._", + "ref/xamarinmac20/_._", + "ref/xamarintvos10/_._", + "ref/xamarinwatchos10/_._", + "system.memory.4.5.0.nupkg.sha512", + "system.memory.nuspec", + "useSharedDesignerContext.txt", + "version.txt" + ] + } + }, + "projectFileDependencyGroups": { + "net7.0": [ + "Confluent.Kafka >= 2.3.0" + ] + }, + "packageFolders": { + "/home/memartel/.nuget/packages/": {} + }, + "project": { + "version": "1.0.0", + "restore": { + "projectUniqueName": "/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/josefk.csproj", + "projectName": "josefk", + "projectPath": "/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/josefk.csproj", + "packagesPath": "/home/memartel/.nuget/packages/", + "outputPath": "/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/obj/", + "projectStyle": "PackageReference", + "configFilePaths": [ + "/home/memartel/.nuget/NuGet/NuGet.Config" + ], + "originalTargetFrameworks": [ + "net7.0" + ], + "sources": { + "https://api.nuget.org/v3/index.json": {} + }, + "frameworks": { + "net7.0": { + "targetAlias": "net7.0", + "projectReferences": {} + } + }, + "warningProperties": { + "warnAsError": [ + "NU1605" + ] + } + }, + "frameworks": { + "net7.0": { + "targetAlias": "net7.0", + "dependencies": { + "Confluent.Kafka": { + "target": "Package", + "version": "[2.3.0, )" + } + }, + "imports": [ + "net461", + "net462", + "net47", + "net471", + "net472", + "net48", + "net481" + ], + "assetTargetFallback": true, + "warn": true, + "frameworkReferences": { + "Microsoft.NETCore.App": { + "privateAssets": "all" + } + }, + "runtimeIdentifierGraphPath": 
"/opt/dotnet-sdk-bin-7.0/sdk/7.0.401/RuntimeIdentifierGraph.json" + } + } + } +} \ No newline at end of file diff --git a/dotnet/josefk_producer/obj/project.nuget.cache b/dotnet/josefk_producer/obj/project.nuget.cache new file mode 100644 index 0000000..5def007 --- /dev/null +++ b/dotnet/josefk_producer/obj/project.nuget.cache @@ -0,0 +1,12 @@ +{ + "version": 2, + "dgSpecHash": "xCvCoHprZRSKNvT90UWP0eGJQ0TmlqeHOrJXXeDMzuJ0CsOQQPPMi3A1RHoKTa0Y2/KOFEsN06JyTdkT1pKLPw==", + "success": true, + "projectFilePath": "/scratch/kafka_2.13-3.6.0/dotnet/josefk_producer/josefk.csproj", + "expectedPackageFiles": [ + "/home/memartel/.nuget/packages/confluent.kafka/2.3.0/confluent.kafka.2.3.0.nupkg.sha512", + "/home/memartel/.nuget/packages/librdkafka.redist/2.3.0/librdkafka.redist.2.3.0.nupkg.sha512", + "/home/memartel/.nuget/packages/system.memory/4.5.0/system.memory.4.5.0.nupkg.sha512" + ], + "logs": [] +} \ No newline at end of file diff --git a/libs/activation-1.1.1.jar b/libs/activation-1.1.1.jar new file mode 100644 index 0000000..1b703ab Binary files /dev/null and b/libs/activation-1.1.1.jar differ diff --git a/libs/aopalliance-repackaged-2.6.1.jar b/libs/aopalliance-repackaged-2.6.1.jar new file mode 100644 index 0000000..35502f0 Binary files /dev/null and b/libs/aopalliance-repackaged-2.6.1.jar differ diff --git a/libs/argparse4j-0.7.0.jar b/libs/argparse4j-0.7.0.jar new file mode 100644 index 0000000..b1865dd Binary files /dev/null and b/libs/argparse4j-0.7.0.jar differ diff --git a/libs/audience-annotations-0.12.0.jar b/libs/audience-annotations-0.12.0.jar new file mode 100644 index 0000000..4e76d80 Binary files /dev/null and b/libs/audience-annotations-0.12.0.jar differ diff --git a/libs/caffeine-2.9.3.jar b/libs/caffeine-2.9.3.jar new file mode 100644 index 0000000..2c85a0d Binary files /dev/null and b/libs/caffeine-2.9.3.jar differ diff --git a/libs/checker-qual-3.19.0.jar b/libs/checker-qual-3.19.0.jar new file mode 100644 index 0000000..3ff12ce Binary files 
/dev/null and b/libs/checker-qual-3.19.0.jar differ diff --git a/libs/commons-beanutils-1.9.4.jar b/libs/commons-beanutils-1.9.4.jar new file mode 100644 index 0000000..b73543c Binary files /dev/null and b/libs/commons-beanutils-1.9.4.jar differ diff --git a/libs/commons-cli-1.4.jar b/libs/commons-cli-1.4.jar new file mode 100644 index 0000000..22deb30 Binary files /dev/null and b/libs/commons-cli-1.4.jar differ diff --git a/libs/commons-collections-3.2.2.jar b/libs/commons-collections-3.2.2.jar new file mode 100644 index 0000000..fa5df82 Binary files /dev/null and b/libs/commons-collections-3.2.2.jar differ diff --git a/libs/commons-digester-2.1.jar b/libs/commons-digester-2.1.jar new file mode 100644 index 0000000..a07cfa8 Binary files /dev/null and b/libs/commons-digester-2.1.jar differ diff --git a/libs/commons-io-2.11.0.jar b/libs/commons-io-2.11.0.jar new file mode 100644 index 0000000..be507d9 Binary files /dev/null and b/libs/commons-io-2.11.0.jar differ diff --git a/libs/commons-lang3-3.8.1.jar b/libs/commons-lang3-3.8.1.jar new file mode 100644 index 0000000..2c65ce6 Binary files /dev/null and b/libs/commons-lang3-3.8.1.jar differ diff --git a/libs/commons-logging-1.2.jar b/libs/commons-logging-1.2.jar new file mode 100644 index 0000000..93a3b9f Binary files /dev/null and b/libs/commons-logging-1.2.jar differ diff --git a/libs/commons-validator-1.7.jar b/libs/commons-validator-1.7.jar new file mode 100644 index 0000000..f98c145 Binary files /dev/null and b/libs/commons-validator-1.7.jar differ diff --git a/libs/connect-api-3.6.0.jar b/libs/connect-api-3.6.0.jar new file mode 100644 index 0000000..c42a40e Binary files /dev/null and b/libs/connect-api-3.6.0.jar differ diff --git a/libs/connect-basic-auth-extension-3.6.0.jar b/libs/connect-basic-auth-extension-3.6.0.jar new file mode 100644 index 0000000..bb12742 Binary files /dev/null and b/libs/connect-basic-auth-extension-3.6.0.jar differ diff --git a/libs/connect-file-3.6.0.jar 
b/libs/connect-file-3.6.0.jar new file mode 100644 index 0000000..60cfb1c Binary files /dev/null and b/libs/connect-file-3.6.0.jar differ diff --git a/libs/connect-json-3.6.0.jar b/libs/connect-json-3.6.0.jar new file mode 100644 index 0000000..d1b87ea Binary files /dev/null and b/libs/connect-json-3.6.0.jar differ diff --git a/libs/connect-mirror-3.6.0.jar b/libs/connect-mirror-3.6.0.jar new file mode 100644 index 0000000..aac3652 Binary files /dev/null and b/libs/connect-mirror-3.6.0.jar differ diff --git a/libs/connect-mirror-client-3.6.0.jar b/libs/connect-mirror-client-3.6.0.jar new file mode 100644 index 0000000..8d136ee Binary files /dev/null and b/libs/connect-mirror-client-3.6.0.jar differ diff --git a/libs/connect-runtime-3.6.0.jar b/libs/connect-runtime-3.6.0.jar new file mode 100644 index 0000000..2930726 Binary files /dev/null and b/libs/connect-runtime-3.6.0.jar differ diff --git a/libs/connect-transforms-3.6.0.jar b/libs/connect-transforms-3.6.0.jar new file mode 100644 index 0000000..8c6645c Binary files /dev/null and b/libs/connect-transforms-3.6.0.jar differ diff --git a/libs/error_prone_annotations-2.10.0.jar b/libs/error_prone_annotations-2.10.0.jar new file mode 100644 index 0000000..2d1b543 Binary files /dev/null and b/libs/error_prone_annotations-2.10.0.jar differ diff --git a/libs/hk2-api-2.6.1.jar b/libs/hk2-api-2.6.1.jar new file mode 100644 index 0000000..03d6eb0 Binary files /dev/null and b/libs/hk2-api-2.6.1.jar differ diff --git a/libs/hk2-locator-2.6.1.jar b/libs/hk2-locator-2.6.1.jar new file mode 100644 index 0000000..0906bd1 Binary files /dev/null and b/libs/hk2-locator-2.6.1.jar differ diff --git a/libs/hk2-utils-2.6.1.jar b/libs/hk2-utils-2.6.1.jar new file mode 100644 index 0000000..768bc48 Binary files /dev/null and b/libs/hk2-utils-2.6.1.jar differ diff --git a/libs/jackson-annotations-2.13.5.jar b/libs/jackson-annotations-2.13.5.jar new file mode 100644 index 0000000..20ecaed Binary files /dev/null and 
b/libs/jackson-annotations-2.13.5.jar differ diff --git a/libs/jackson-core-2.13.5.jar b/libs/jackson-core-2.13.5.jar new file mode 100644 index 0000000..401dee3 Binary files /dev/null and b/libs/jackson-core-2.13.5.jar differ diff --git a/libs/jackson-databind-2.13.5.jar b/libs/jackson-databind-2.13.5.jar new file mode 100644 index 0000000..fde442b Binary files /dev/null and b/libs/jackson-databind-2.13.5.jar differ diff --git a/libs/jackson-dataformat-csv-2.13.5.jar b/libs/jackson-dataformat-csv-2.13.5.jar new file mode 100644 index 0000000..08569aa Binary files /dev/null and b/libs/jackson-dataformat-csv-2.13.5.jar differ diff --git a/libs/jackson-datatype-jdk8-2.13.5.jar b/libs/jackson-datatype-jdk8-2.13.5.jar new file mode 100644 index 0000000..b002723 Binary files /dev/null and b/libs/jackson-datatype-jdk8-2.13.5.jar differ diff --git a/libs/jackson-jaxrs-base-2.13.5.jar b/libs/jackson-jaxrs-base-2.13.5.jar new file mode 100644 index 0000000..7876a40 Binary files /dev/null and b/libs/jackson-jaxrs-base-2.13.5.jar differ diff --git a/libs/jackson-jaxrs-json-provider-2.13.5.jar b/libs/jackson-jaxrs-json-provider-2.13.5.jar new file mode 100644 index 0000000..334b418 Binary files /dev/null and b/libs/jackson-jaxrs-json-provider-2.13.5.jar differ diff --git a/libs/jackson-module-jaxb-annotations-2.13.5.jar b/libs/jackson-module-jaxb-annotations-2.13.5.jar new file mode 100644 index 0000000..de7c0f3 Binary files /dev/null and b/libs/jackson-module-jaxb-annotations-2.13.5.jar differ diff --git a/libs/jackson-module-scala_2.13-2.13.5.jar b/libs/jackson-module-scala_2.13-2.13.5.jar new file mode 100644 index 0000000..fd42a69 Binary files /dev/null and b/libs/jackson-module-scala_2.13-2.13.5.jar differ diff --git a/libs/jakarta.activation-api-1.2.2.jar b/libs/jakarta.activation-api-1.2.2.jar new file mode 100644 index 0000000..3cc969d Binary files /dev/null and b/libs/jakarta.activation-api-1.2.2.jar differ diff --git a/libs/jakarta.annotation-api-1.3.5.jar 
b/libs/jakarta.annotation-api-1.3.5.jar new file mode 100644 index 0000000..606d992 Binary files /dev/null and b/libs/jakarta.annotation-api-1.3.5.jar differ diff --git a/libs/jakarta.inject-2.6.1.jar b/libs/jakarta.inject-2.6.1.jar new file mode 100644 index 0000000..cee6acd Binary files /dev/null and b/libs/jakarta.inject-2.6.1.jar differ diff --git a/libs/jakarta.validation-api-2.0.2.jar b/libs/jakarta.validation-api-2.0.2.jar new file mode 100644 index 0000000..d68c9f7 Binary files /dev/null and b/libs/jakarta.validation-api-2.0.2.jar differ diff --git a/libs/jakarta.ws.rs-api-2.1.6.jar b/libs/jakarta.ws.rs-api-2.1.6.jar new file mode 100644 index 0000000..4850659 Binary files /dev/null and b/libs/jakarta.ws.rs-api-2.1.6.jar differ diff --git a/libs/jakarta.xml.bind-api-2.3.3.jar b/libs/jakarta.xml.bind-api-2.3.3.jar new file mode 100644 index 0000000..b8c7dc1 Binary files /dev/null and b/libs/jakarta.xml.bind-api-2.3.3.jar differ diff --git a/libs/javassist-3.29.2-GA.jar b/libs/javassist-3.29.2-GA.jar new file mode 100644 index 0000000..68fc301 Binary files /dev/null and b/libs/javassist-3.29.2-GA.jar differ diff --git a/libs/javax.activation-api-1.2.0.jar b/libs/javax.activation-api-1.2.0.jar new file mode 100644 index 0000000..986c365 Binary files /dev/null and b/libs/javax.activation-api-1.2.0.jar differ diff --git a/libs/javax.annotation-api-1.3.2.jar b/libs/javax.annotation-api-1.3.2.jar new file mode 100644 index 0000000..a8a470a Binary files /dev/null and b/libs/javax.annotation-api-1.3.2.jar differ diff --git a/libs/javax.servlet-api-3.1.0.jar b/libs/javax.servlet-api-3.1.0.jar new file mode 100644 index 0000000..6b14c3d Binary files /dev/null and b/libs/javax.servlet-api-3.1.0.jar differ diff --git a/libs/javax.ws.rs-api-2.1.1.jar b/libs/javax.ws.rs-api-2.1.1.jar new file mode 100644 index 0000000..3eabbf0 Binary files /dev/null and b/libs/javax.ws.rs-api-2.1.1.jar differ diff --git a/libs/jaxb-api-2.3.1.jar b/libs/jaxb-api-2.3.1.jar new file mode 
100644 index 0000000..4565865 Binary files /dev/null and b/libs/jaxb-api-2.3.1.jar differ diff --git a/libs/jersey-client-2.39.1.jar b/libs/jersey-client-2.39.1.jar new file mode 100644 index 0000000..ebe07a1 Binary files /dev/null and b/libs/jersey-client-2.39.1.jar differ diff --git a/libs/jersey-common-2.39.1.jar b/libs/jersey-common-2.39.1.jar new file mode 100644 index 0000000..6ef0176 Binary files /dev/null and b/libs/jersey-common-2.39.1.jar differ diff --git a/libs/jersey-container-servlet-2.39.1.jar b/libs/jersey-container-servlet-2.39.1.jar new file mode 100644 index 0000000..451d721 Binary files /dev/null and b/libs/jersey-container-servlet-2.39.1.jar differ diff --git a/libs/jersey-container-servlet-core-2.39.1.jar b/libs/jersey-container-servlet-core-2.39.1.jar new file mode 100644 index 0000000..af3e491 Binary files /dev/null and b/libs/jersey-container-servlet-core-2.39.1.jar differ diff --git a/libs/jersey-hk2-2.39.1.jar b/libs/jersey-hk2-2.39.1.jar new file mode 100644 index 0000000..ff3596f Binary files /dev/null and b/libs/jersey-hk2-2.39.1.jar differ diff --git a/libs/jersey-server-2.39.1.jar b/libs/jersey-server-2.39.1.jar new file mode 100644 index 0000000..b9240a2 Binary files /dev/null and b/libs/jersey-server-2.39.1.jar differ diff --git a/libs/jetty-client-9.4.52.v20230823.jar b/libs/jetty-client-9.4.52.v20230823.jar new file mode 100644 index 0000000..ed5b990 Binary files /dev/null and b/libs/jetty-client-9.4.52.v20230823.jar differ diff --git a/libs/jetty-continuation-9.4.52.v20230823.jar b/libs/jetty-continuation-9.4.52.v20230823.jar new file mode 100644 index 0000000..3922cf5 Binary files /dev/null and b/libs/jetty-continuation-9.4.52.v20230823.jar differ diff --git a/libs/jetty-http-9.4.52.v20230823.jar b/libs/jetty-http-9.4.52.v20230823.jar new file mode 100644 index 0000000..818a7c8 Binary files /dev/null and b/libs/jetty-http-9.4.52.v20230823.jar differ diff --git a/libs/jetty-io-9.4.52.v20230823.jar 
b/libs/jetty-io-9.4.52.v20230823.jar new file mode 100644 index 0000000..6039858 Binary files /dev/null and b/libs/jetty-io-9.4.52.v20230823.jar differ diff --git a/libs/jetty-security-9.4.52.v20230823.jar b/libs/jetty-security-9.4.52.v20230823.jar new file mode 100644 index 0000000..602783e Binary files /dev/null and b/libs/jetty-security-9.4.52.v20230823.jar differ diff --git a/libs/jetty-server-9.4.52.v20230823.jar b/libs/jetty-server-9.4.52.v20230823.jar new file mode 100644 index 0000000..b96c99c Binary files /dev/null and b/libs/jetty-server-9.4.52.v20230823.jar differ diff --git a/libs/jetty-servlet-9.4.52.v20230823.jar b/libs/jetty-servlet-9.4.52.v20230823.jar new file mode 100644 index 0000000..ff4ca39 Binary files /dev/null and b/libs/jetty-servlet-9.4.52.v20230823.jar differ diff --git a/libs/jetty-servlets-9.4.52.v20230823.jar b/libs/jetty-servlets-9.4.52.v20230823.jar new file mode 100644 index 0000000..bde3e0f Binary files /dev/null and b/libs/jetty-servlets-9.4.52.v20230823.jar differ diff --git a/libs/jetty-util-9.4.52.v20230823.jar b/libs/jetty-util-9.4.52.v20230823.jar new file mode 100644 index 0000000..0e993ba Binary files /dev/null and b/libs/jetty-util-9.4.52.v20230823.jar differ diff --git a/libs/jetty-util-ajax-9.4.52.v20230823.jar b/libs/jetty-util-ajax-9.4.52.v20230823.jar new file mode 100644 index 0000000..9b78b98 Binary files /dev/null and b/libs/jetty-util-ajax-9.4.52.v20230823.jar differ diff --git a/libs/jline-3.22.0.jar b/libs/jline-3.22.0.jar new file mode 100644 index 0000000..b016252 Binary files /dev/null and b/libs/jline-3.22.0.jar differ diff --git a/libs/jopt-simple-5.0.4.jar b/libs/jopt-simple-5.0.4.jar new file mode 100644 index 0000000..317b2b0 Binary files /dev/null and b/libs/jopt-simple-5.0.4.jar differ diff --git a/libs/jose4j-0.9.3.jar b/libs/jose4j-0.9.3.jar new file mode 100644 index 0000000..e073555 Binary files /dev/null and b/libs/jose4j-0.9.3.jar differ diff --git a/libs/jsr305-3.0.2.jar b/libs/jsr305-3.0.2.jar 
new file mode 100644 index 0000000..59222d9 Binary files /dev/null and b/libs/jsr305-3.0.2.jar differ diff --git a/libs/kafka-clients-3.6.0.jar b/libs/kafka-clients-3.6.0.jar new file mode 100644 index 0000000..fc6c454 Binary files /dev/null and b/libs/kafka-clients-3.6.0.jar differ diff --git a/libs/kafka-group-coordinator-3.6.0.jar b/libs/kafka-group-coordinator-3.6.0.jar new file mode 100644 index 0000000..a971343 Binary files /dev/null and b/libs/kafka-group-coordinator-3.6.0.jar differ diff --git a/libs/kafka-log4j-appender-3.6.0.jar b/libs/kafka-log4j-appender-3.6.0.jar new file mode 100644 index 0000000..b6f7961 Binary files /dev/null and b/libs/kafka-log4j-appender-3.6.0.jar differ diff --git a/libs/kafka-metadata-3.6.0.jar b/libs/kafka-metadata-3.6.0.jar new file mode 100644 index 0000000..c4c1962 Binary files /dev/null and b/libs/kafka-metadata-3.6.0.jar differ diff --git a/libs/kafka-raft-3.6.0.jar b/libs/kafka-raft-3.6.0.jar new file mode 100644 index 0000000..9943d76 Binary files /dev/null and b/libs/kafka-raft-3.6.0.jar differ diff --git a/libs/kafka-server-common-3.6.0.jar b/libs/kafka-server-common-3.6.0.jar new file mode 100644 index 0000000..5dac68c Binary files /dev/null and b/libs/kafka-server-common-3.6.0.jar differ diff --git a/libs/kafka-shell-3.6.0.jar b/libs/kafka-shell-3.6.0.jar new file mode 100644 index 0000000..0477e65 Binary files /dev/null and b/libs/kafka-shell-3.6.0.jar differ diff --git a/libs/kafka-storage-3.6.0.jar b/libs/kafka-storage-3.6.0.jar new file mode 100644 index 0000000..7a0ff2c Binary files /dev/null and b/libs/kafka-storage-3.6.0.jar differ diff --git a/libs/kafka-storage-api-3.6.0.jar b/libs/kafka-storage-api-3.6.0.jar new file mode 100644 index 0000000..e78342c Binary files /dev/null and b/libs/kafka-storage-api-3.6.0.jar differ diff --git a/libs/kafka-streams-3.6.0.jar b/libs/kafka-streams-3.6.0.jar new file mode 100644 index 0000000..d61f116 Binary files /dev/null and b/libs/kafka-streams-3.6.0.jar differ diff 
--git a/libs/kafka-streams-examples-3.6.0.jar b/libs/kafka-streams-examples-3.6.0.jar new file mode 100644 index 0000000..07fe001 Binary files /dev/null and b/libs/kafka-streams-examples-3.6.0.jar differ diff --git a/libs/kafka-streams-scala_2.13-3.6.0.jar b/libs/kafka-streams-scala_2.13-3.6.0.jar new file mode 100644 index 0000000..0749da4 Binary files /dev/null and b/libs/kafka-streams-scala_2.13-3.6.0.jar differ diff --git a/libs/kafka-streams-test-utils-3.6.0.jar b/libs/kafka-streams-test-utils-3.6.0.jar new file mode 100644 index 0000000..892eb65 Binary files /dev/null and b/libs/kafka-streams-test-utils-3.6.0.jar differ diff --git a/libs/kafka-tools-3.6.0.jar b/libs/kafka-tools-3.6.0.jar new file mode 100644 index 0000000..ea17cc2 Binary files /dev/null and b/libs/kafka-tools-3.6.0.jar differ diff --git a/libs/kafka-tools-api-3.6.0.jar b/libs/kafka-tools-api-3.6.0.jar new file mode 100644 index 0000000..10be1f7 Binary files /dev/null and b/libs/kafka-tools-api-3.6.0.jar differ diff --git a/libs/kafka_2.13-3.6.0.jar b/libs/kafka_2.13-3.6.0.jar new file mode 100644 index 0000000..21b4f97 Binary files /dev/null and b/libs/kafka_2.13-3.6.0.jar differ diff --git a/libs/lz4-java-1.8.0.jar b/libs/lz4-java-1.8.0.jar new file mode 100644 index 0000000..89c644b Binary files /dev/null and b/libs/lz4-java-1.8.0.jar differ diff --git a/libs/maven-artifact-3.8.8.jar b/libs/maven-artifact-3.8.8.jar new file mode 100644 index 0000000..17ee3c2 Binary files /dev/null and b/libs/maven-artifact-3.8.8.jar differ diff --git a/libs/metrics-core-2.2.0.jar b/libs/metrics-core-2.2.0.jar new file mode 100644 index 0000000..0f6d1cb Binary files /dev/null and b/libs/metrics-core-2.2.0.jar differ diff --git a/libs/metrics-core-4.1.12.1.jar b/libs/metrics-core-4.1.12.1.jar new file mode 100644 index 0000000..94fc834 Binary files /dev/null and b/libs/metrics-core-4.1.12.1.jar differ diff --git a/libs/netty-buffer-4.1.94.Final.jar b/libs/netty-buffer-4.1.94.Final.jar new file mode 100644 
index 0000000..b7ca7dd Binary files /dev/null and b/libs/netty-buffer-4.1.94.Final.jar differ diff --git a/libs/netty-codec-4.1.94.Final.jar b/libs/netty-codec-4.1.94.Final.jar new file mode 100644 index 0000000..a3f989c Binary files /dev/null and b/libs/netty-codec-4.1.94.Final.jar differ diff --git a/libs/netty-common-4.1.94.Final.jar b/libs/netty-common-4.1.94.Final.jar new file mode 100644 index 0000000..98d8abd Binary files /dev/null and b/libs/netty-common-4.1.94.Final.jar differ diff --git a/libs/netty-handler-4.1.94.Final.jar b/libs/netty-handler-4.1.94.Final.jar new file mode 100644 index 0000000..716799a Binary files /dev/null and b/libs/netty-handler-4.1.94.Final.jar differ diff --git a/libs/netty-resolver-4.1.94.Final.jar b/libs/netty-resolver-4.1.94.Final.jar new file mode 100644 index 0000000..e915955 Binary files /dev/null and b/libs/netty-resolver-4.1.94.Final.jar differ diff --git a/libs/netty-transport-4.1.94.Final.jar b/libs/netty-transport-4.1.94.Final.jar new file mode 100644 index 0000000..6f40952 Binary files /dev/null and b/libs/netty-transport-4.1.94.Final.jar differ diff --git a/libs/netty-transport-classes-epoll-4.1.94.Final.jar b/libs/netty-transport-classes-epoll-4.1.94.Final.jar new file mode 100644 index 0000000..5ed85e6 Binary files /dev/null and b/libs/netty-transport-classes-epoll-4.1.94.Final.jar differ diff --git a/libs/netty-transport-native-epoll-4.1.94.Final.jar b/libs/netty-transport-native-epoll-4.1.94.Final.jar new file mode 100644 index 0000000..e46a4c8 Binary files /dev/null and b/libs/netty-transport-native-epoll-4.1.94.Final.jar differ diff --git a/libs/netty-transport-native-unix-common-4.1.94.Final.jar b/libs/netty-transport-native-unix-common-4.1.94.Final.jar new file mode 100644 index 0000000..a0fa922 Binary files /dev/null and b/libs/netty-transport-native-unix-common-4.1.94.Final.jar differ diff --git a/libs/osgi-resource-locator-1.0.3.jar b/libs/osgi-resource-locator-1.0.3.jar new file mode 100644 index 
0000000..0f3c386 Binary files /dev/null and b/libs/osgi-resource-locator-1.0.3.jar differ diff --git a/libs/paranamer-2.8.jar b/libs/paranamer-2.8.jar new file mode 100644 index 0000000..0bf659b Binary files /dev/null and b/libs/paranamer-2.8.jar differ diff --git a/libs/pcollections-4.0.1.jar b/libs/pcollections-4.0.1.jar new file mode 100644 index 0000000..5d5ae28 Binary files /dev/null and b/libs/pcollections-4.0.1.jar differ diff --git a/libs/plexus-utils-3.3.1.jar b/libs/plexus-utils-3.3.1.jar new file mode 100644 index 0000000..956c653 Binary files /dev/null and b/libs/plexus-utils-3.3.1.jar differ diff --git a/libs/reflections-0.10.2.jar b/libs/reflections-0.10.2.jar new file mode 100644 index 0000000..a596f55 Binary files /dev/null and b/libs/reflections-0.10.2.jar differ diff --git a/libs/reload4j-1.2.25.jar b/libs/reload4j-1.2.25.jar new file mode 100644 index 0000000..1b51d62 Binary files /dev/null and b/libs/reload4j-1.2.25.jar differ diff --git a/libs/rocksdbjni-7.9.2.jar b/libs/rocksdbjni-7.9.2.jar new file mode 100644 index 0000000..4aedbe3 Binary files /dev/null and b/libs/rocksdbjni-7.9.2.jar differ diff --git a/libs/scala-collection-compat_2.13-2.10.0.jar b/libs/scala-collection-compat_2.13-2.10.0.jar new file mode 100644 index 0000000..82c221d Binary files /dev/null and b/libs/scala-collection-compat_2.13-2.10.0.jar differ diff --git a/libs/scala-java8-compat_2.13-1.0.2.jar b/libs/scala-java8-compat_2.13-1.0.2.jar new file mode 100644 index 0000000..11bc17e Binary files /dev/null and b/libs/scala-java8-compat_2.13-1.0.2.jar differ diff --git a/libs/scala-library-2.13.11.jar b/libs/scala-library-2.13.11.jar new file mode 100644 index 0000000..465a79e Binary files /dev/null and b/libs/scala-library-2.13.11.jar differ diff --git a/libs/scala-logging_2.13-3.9.4.jar b/libs/scala-logging_2.13-3.9.4.jar new file mode 100644 index 0000000..107e741 Binary files /dev/null and b/libs/scala-logging_2.13-3.9.4.jar differ diff --git 
a/libs/scala-reflect-2.13.11.jar b/libs/scala-reflect-2.13.11.jar new file mode 100644 index 0000000..7019143 Binary files /dev/null and b/libs/scala-reflect-2.13.11.jar differ diff --git a/libs/slf4j-api-1.7.36.jar b/libs/slf4j-api-1.7.36.jar new file mode 100644 index 0000000..7d3ce68 Binary files /dev/null and b/libs/slf4j-api-1.7.36.jar differ diff --git a/libs/slf4j-reload4j-1.7.36.jar b/libs/slf4j-reload4j-1.7.36.jar new file mode 100644 index 0000000..b007cc7 Binary files /dev/null and b/libs/slf4j-reload4j-1.7.36.jar differ diff --git a/libs/snappy-java-1.1.10.4.jar b/libs/snappy-java-1.1.10.4.jar new file mode 100644 index 0000000..2c0b86b Binary files /dev/null and b/libs/snappy-java-1.1.10.4.jar differ diff --git a/libs/swagger-annotations-2.2.8.jar b/libs/swagger-annotations-2.2.8.jar new file mode 100644 index 0000000..9f71ba0 Binary files /dev/null and b/libs/swagger-annotations-2.2.8.jar differ diff --git a/libs/trogdor-3.6.0.jar b/libs/trogdor-3.6.0.jar new file mode 100644 index 0000000..2cf48f4 Binary files /dev/null and b/libs/trogdor-3.6.0.jar differ diff --git a/libs/zookeeper-3.8.2.jar b/libs/zookeeper-3.8.2.jar new file mode 100644 index 0000000..efa16ed Binary files /dev/null and b/libs/zookeeper-3.8.2.jar differ diff --git a/libs/zookeeper-jute-3.8.2.jar b/libs/zookeeper-jute-3.8.2.jar new file mode 100644 index 0000000..013c397 Binary files /dev/null and b/libs/zookeeper-jute-3.8.2.jar differ diff --git a/libs/zstd-jni-1.5.5-1.jar b/libs/zstd-jni-1.5.5-1.jar new file mode 100644 index 0000000..40b44c0 Binary files /dev/null and b/libs/zstd-jni-1.5.5-1.jar differ diff --git a/licenses/CDDL+GPL-1.1 b/licenses/CDDL+GPL-1.1 new file mode 100644 index 0000000..4b156e6 --- /dev/null +++ b/licenses/CDDL+GPL-1.1 @@ -0,0 +1,760 @@ +COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL) Version 1.1 + +1. Definitions. + + 1.1. "Contributor" means each individual or entity that creates or + contributes to the creation of Modifications. + + 1.2. 
"Contributor Version" means the combination of the Original + Software, prior Modifications used by a Contributor (if any), and + the Modifications made by that particular Contributor. + + 1.3. "Covered Software" means (a) the Original Software, or (b) + Modifications, or (c) the combination of files containing Original + Software with files containing Modifications, in each case including + portions thereof. + + 1.4. "Executable" means the Covered Software in any form other than + Source Code. + + 1.5. "Initial Developer" means the individual or entity that first + makes Original Software available under this License. + + 1.6. "Larger Work" means a work which combines Covered Software or + portions thereof with code not governed by the terms of this License. + + 1.7. "License" means this document. + + 1.8. "Licensable" means having the right to grant, to the maximum + extent possible, whether at the time of the initial grant or + subsequently acquired, any and all of the rights conveyed herein. + + 1.9. "Modifications" means the Source Code and Executable form of + any of the following: + + A. Any file that results from an addition to, deletion from or + modification of the contents of a file containing Original Software + or previous Modifications; + + B. Any new file that contains any part of the Original Software or + previous Modification; or + + C. Any new file that is contributed or otherwise made available + under the terms of this License. + + 1.10. "Original Software" means the Source Code and Executable form + of computer software code that is originally released under this + License. + + 1.11. "Patent Claims" means any patent claim(s), now owned or + hereafter acquired, including without limitation, method, process, + and apparatus claims, in any patent Licensable by grantor. + + 1.12. "Source Code" means (a) the common form of computer software + code in which modifications are made and (b) associated + documentation included in or with such code. 
+ + 1.13. "You" (or "Your") means an individual or a legal entity + exercising rights under, and complying with all of the terms of, + this License. For legal entities, "You" includes any entity which + controls, is controlled by, or is under common control with You. For + purposes of this definition, "control" means (a) the power, direct + or indirect, to cause the direction or management of such entity, + whether by contract or otherwise, or (b) ownership of more than + fifty percent (50%) of the outstanding shares or beneficial + ownership of such entity. + +2. License Grants. + + 2.1. The Initial Developer Grant. + + Conditioned upon Your compliance with Section 3.1 below and subject + to third party intellectual property claims, the Initial Developer + hereby grants You a world-wide, royalty-free, non-exclusive license: + + (a) under intellectual property rights (other than patent or + trademark) Licensable by Initial Developer, to use, reproduce, + modify, display, perform, sublicense and distribute the Original + Software (or portions thereof), with or without Modifications, + and/or as part of a Larger Work; and + + (b) under Patent Claims infringed by the making, using or selling of + Original Software, to make, have made, use, practice, sell, and + offer for sale, and/or otherwise dispose of the Original Software + (or portions thereof). + + (c) The licenses granted in Sections 2.1(a) and (b) are effective on + the date Initial Developer first distributes or otherwise makes the + Original Software available to a third party under the terms of this + License. + + (d) Notwithstanding Section 2.1(b) above, no patent license is + granted: (1) for code that You delete from the Original Software, or + (2) for infringements caused by: (i) the modification of the + Original Software, or (ii) the combination of the Original Software + with other software or devices. + + 2.2. Contributor Grant. 
+ + Conditioned upon Your compliance with Section 3.1 below and subject + to third party intellectual property claims, each Contributor hereby + grants You a world-wide, royalty-free, non-exclusive license: + + (a) under intellectual property rights (other than patent or + trademark) Licensable by Contributor to use, reproduce, modify, + display, perform, sublicense and distribute the Modifications + created by such Contributor (or portions thereof), either on an + unmodified basis, with other Modifications, as Covered Software + and/or as part of a Larger Work; and + + (b) under Patent Claims infringed by the making, using, or selling + of Modifications made by that Contributor either alone and/or in + combination with its Contributor Version (or portions of such + combination), to make, use, sell, offer for sale, have made, and/or + otherwise dispose of: (1) Modifications made by that Contributor (or + portions thereof); and (2) the combination of Modifications made by + that Contributor with its Contributor Version (or portions of such + combination). + + (c) The licenses granted in Sections 2.2(a) and 2.2(b) are effective + on the date Contributor first distributes or otherwise makes the + Modifications available to a third party. + + (d) Notwithstanding Section 2.2(b) above, no patent license is + granted: (1) for any code that Contributor has deleted from the + Contributor Version; (2) for infringements caused by: (i) third + party modifications of Contributor Version, or (ii) the combination + of Modifications made by that Contributor with other software + (except as part of the Contributor Version) or other devices; or (3) + under Patent Claims infringed by Covered Software in the absence of + Modifications made by that Contributor. + +3. Distribution Obligations. + + 3.1. Availability of Source Code. 
+ + Any Covered Software that You distribute or otherwise make available + in Executable form must also be made available in Source Code form + and that Source Code form must be distributed only under the terms + of this License. You must include a copy of this License with every + copy of the Source Code form of the Covered Software You distribute + or otherwise make available. You must inform recipients of any such + Covered Software in Executable form as to how they can obtain such + Covered Software in Source Code form in a reasonable manner on or + through a medium customarily used for software exchange. + + 3.2. Modifications. + + The Modifications that You create or to which You contribute are + governed by the terms of this License. You represent that You + believe Your Modifications are Your original creation(s) and/or You + have sufficient rights to grant the rights conveyed by this License. + + 3.3. Required Notices. + + You must include a notice in each of Your Modifications that + identifies You as the Contributor of the Modification. You may not + remove or alter any copyright, patent or trademark notices contained + within the Covered Software, or any notices of licensing or any + descriptive text giving attribution to any Contributor or the + Initial Developer. + + 3.4. Application of Additional Terms. + + You may not offer or impose any terms on any Covered Software in + Source Code form that alters or restricts the applicable version of + this License or the recipients' rights hereunder. You may choose to + offer, and to charge a fee for, warranty, support, indemnity or + liability obligations to one or more recipients of Covered Software. + However, you may do so only on Your own behalf, and not on behalf of + the Initial Developer or any Contributor. 
You must make it + absolutely clear that any such warranty, support, indemnity or + liability obligation is offered by You alone, and You hereby agree + to indemnify the Initial Developer and every Contributor for any + liability incurred by the Initial Developer or such Contributor as a + result of warranty, support, indemnity or liability terms You offer. + + 3.5. Distribution of Executable Versions. + + You may distribute the Executable form of the Covered Software under + the terms of this License or under the terms of a license of Your + choice, which may contain terms different from this License, + provided that You are in compliance with the terms of this License + and that the license for the Executable form does not attempt to + limit or alter the recipient's rights in the Source Code form from + the rights set forth in this License. If You distribute the Covered + Software in Executable form under a different license, You must make + it absolutely clear that any terms which differ from this License + are offered by You alone, not by the Initial Developer or + Contributor. You hereby agree to indemnify the Initial Developer and + every Contributor for any liability incurred by the Initial + Developer or such Contributor as a result of any such terms You offer. + + 3.6. Larger Works. + + You may create a Larger Work by combining Covered Software with + other code not governed by the terms of this License and distribute + the Larger Work as a single product. In such a case, You must make + sure the requirements of this License are fulfilled for the Covered + Software. + +4. Versions of the License. + + 4.1. New Versions. + + Oracle is the initial license steward and may publish revised and/or + new versions of this License from time to time. Each version will be + given a distinguishing version number. Except as provided in Section + 4.3, no one other than the license steward has the right to modify + this License. + + 4.2. Effect of New Versions. 
+ + You may always continue to use, distribute or otherwise make the + Covered Software available under the terms of the version of the + License under which You originally received the Covered Software. If + the Initial Developer includes a notice in the Original Software + prohibiting it from being distributed or otherwise made available + under any subsequent version of the License, You must distribute and + make the Covered Software available under the terms of the version + of the License under which You originally received the Covered + Software. Otherwise, You may also choose to use, distribute or + otherwise make the Covered Software available under the terms of any + subsequent version of the License published by the license steward. + + 4.3. Modified Versions. + + When You are an Initial Developer and You want to create a new + license for Your Original Software, You may create and use a + modified version of this License if You: (a) rename the license and + remove any references to the name of the license steward (except to + note that the license differs from this License); and (b) otherwise + make it clear that the license contains terms which differ from this + License. + +5. DISCLAIMER OF WARRANTY. + + COVERED SOFTWARE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" BASIS, + WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, + INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COVERED SOFTWARE + IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR + NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF + THE COVERED SOFTWARE IS WITH YOU. SHOULD ANY COVERED SOFTWARE PROVE + DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY + OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, + REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN + ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY COVERED SOFTWARE IS + AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER. + +6. TERMINATION. + + 6.1. 
This License and the rights granted hereunder will terminate + automatically if You fail to comply with terms herein and fail to + cure such breach within 30 days of becoming aware of the breach. + Provisions which, by their nature, must remain in effect beyond the + termination of this License shall survive. + + 6.2. If You assert a patent infringement claim (excluding + declaratory judgment actions) against Initial Developer or a + Contributor (the Initial Developer or Contributor against whom You + assert such claim is referred to as "Participant") alleging that the + Participant Software (meaning the Contributor Version where the + Participant is a Contributor or the Original Software where the + Participant is the Initial Developer) directly or indirectly + infringes any patent, then any and all rights granted directly or + indirectly to You by such Participant, the Initial Developer (if the + Initial Developer is not the Participant) and all Contributors under + Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice + from Participant terminate prospectively and automatically at the + expiration of such 60 day notice period, unless if within such 60 + day period You withdraw Your claim with respect to the Participant + Software against such Participant either unilaterally or pursuant to + a written agreement with Participant. + + 6.3. If You assert a patent infringement claim against Participant + alleging that the Participant Software directly or indirectly + infringes any patent where such claim is resolved (such as by + license or settlement) prior to the initiation of patent + infringement litigation, then the reasonable value of the licenses + granted by such Participant under Sections 2.1 or 2.2 shall be taken + into account in determining the amount or value of any payment or + license. + + 6.4. 
In the event of termination under Sections 6.1 or 6.2 above, + all end user licenses that have been validly granted by You or any + distributor hereunder prior to termination (excluding licenses + granted to You by any distributor) shall survive termination. + +7. LIMITATION OF LIABILITY. + + UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT + (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE + INITIAL DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF + COVERED SOFTWARE, OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE + TO ANY PERSON FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR + CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT + LIMITATION, DAMAGES FOR LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER + FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR + LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN INFORMED OF THE + POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF LIABILITY SHALL NOT + APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY RESULTING FROM SUCH + PARTY'S NEGLIGENCE TO THE EXTENT APPLICABLE LAW PROHIBITS SUCH + LIMITATION. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR + LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THIS EXCLUSION + AND LIMITATION MAY NOT APPLY TO YOU. + +8. U.S. GOVERNMENT END USERS. + + The Covered Software is a "commercial item," as that term is defined + in 48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial computer + software" (as that term is defined at 48 C.F.R. § + 252.227-7014(a)(1)) and "commercial computer software documentation" + as such terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent + with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 + (June 1995), all U.S. Government End Users acquire Covered Software + with only those rights set forth herein. This U.S. Government Rights + clause is in lieu of, and supersedes, any other FAR, DFAR, or other + clause or provision that addresses Government rights in computer + software under this License. 
+ +9. MISCELLANEOUS. + + This License represents the complete agreement concerning subject + matter hereof. If any provision of this License is held to be + unenforceable, such provision shall be reformed only to the extent + necessary to make it enforceable. This License shall be governed by + the law of the jurisdiction specified in a notice contained within + the Original Software (except to the extent applicable law, if any, + provides otherwise), excluding such jurisdiction's conflict-of-law + provisions. Any litigation relating to this License shall be subject + to the jurisdiction of the courts located in the jurisdiction and + venue specified in a notice contained within the Original Software, + with the losing party responsible for costs, including, without + limitation, court costs and reasonable attorneys' fees and expenses. + The application of the United Nations Convention on Contracts for + the International Sale of Goods is expressly excluded. Any law or + regulation which provides that the language of a contract shall be + construed against the drafter shall not apply to this License. You + agree that You alone are responsible for compliance with the United + States export administration regulations (and the export control + laws and regulation of any other countries) when You use, distribute + or otherwise make available any Covered Software. + +10. RESPONSIBILITY FOR CLAIMS. + + As between Initial Developer and the Contributors, each party is + responsible for claims and damages arising, directly or indirectly, + out of its utilization of rights under this License and You agree to + work with Initial Developer and Contributors to distribute such + responsibility on an equitable basis. Nothing herein is intended or + shall be deemed to constitute any admission of liability. 
+ +------------------------------------------------------------------------ + +NOTICE PURSUANT TO SECTION 9 OF THE COMMON DEVELOPMENT AND DISTRIBUTION +LICENSE (CDDL) + +The code released under the CDDL shall be governed by the laws of the +State of California (excluding conflict-of-law provisions). Any +litigation relating to this License shall be subject to the jurisdiction +of the Federal Courts of the Northern District of California and the +state courts of the State of California, with venue lying in Santa Clara +County, California. + + + + The GNU General Public License (GPL) Version 2, June 1991 + +Copyright (C) 1989, 1991 Free Software Foundation, Inc. +51 Franklin Street, Fifth Floor +Boston, MA 02110-1335 +USA + +Everyone is permitted to copy and distribute verbatim copies +of this license document, but changing it is not allowed. + +Preamble + +The licenses for most software are designed to take away your freedom to +share and change it. By contrast, the GNU General Public License is +intended to guarantee your freedom to share and change free software--to +make sure the software is free for all its users. This General Public +License applies to most of the Free Software Foundation's software and +to any other program whose authors commit to using it. (Some other Free +Software Foundation software is covered by the GNU Library General +Public License instead.) You can apply it to your programs, too. + +When we speak of free software, we are referring to freedom, not price. +Our General Public Licenses are designed to make sure that you have the +freedom to distribute copies of free software (and charge for this +service if you wish), that you receive source code or can get it if you +want it, that you can change the software or use pieces of it in new +free programs; and that you know you can do these things. + +To protect your rights, we need to make restrictions that forbid anyone +to deny you these rights or to ask you to surrender the rights. 
These +restrictions translate to certain responsibilities for you if you +distribute copies of the software, or if you modify it. + +For example, if you distribute copies of such a program, whether gratis +or for a fee, you must give the recipients all the rights that you have. +You must make sure that they, too, receive or can get the source code. +And you must show them these terms so they know their rights. + +We protect your rights with two steps: (1) copyright the software, and +(2) offer you this license which gives you legal permission to copy, +distribute and/or modify the software. + +Also, for each author's protection and ours, we want to make certain +that everyone understands that there is no warranty for this free +software. If the software is modified by someone else and passed on, we +want its recipients to know that what they have is not the original, so +that any problems introduced by others will not reflect on the original +authors' reputations. + +Finally, any free program is threatened constantly by software patents. +We wish to avoid the danger that redistributors of a free program will +individually obtain patent licenses, in effect making the program +proprietary. To prevent this, we have made it clear that any patent must +be licensed for everyone's free use or not licensed at all. + +The precise terms and conditions for copying, distribution and +modification follow. + +TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + +0. This License applies to any program or other work which contains a +notice placed by the copyright holder saying it may be distributed under +the terms of this General Public License. The "Program", below, refers +to any such program or work, and a "work based on the Program" means +either the Program or any derivative work under copyright law: that is +to say, a work containing the Program or a portion of it, either +verbatim or with modifications and/or translated into another language. 
+(Hereinafter, translation is included without limitation in the term +"modification".) Each licensee is addressed as "you". + +Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of running +the Program is not restricted, and the output from the Program is +covered only if its contents constitute a work based on the Program +(independent of having been made by running the Program). Whether that +is true depends on what the Program does. + +1. You may copy and distribute verbatim copies of the Program's source +code as you receive it, in any medium, provided that you conspicuously +and appropriately publish on each copy an appropriate copyright notice +and disclaimer of warranty; keep intact all the notices that refer to +this License and to the absence of any warranty; and give any other +recipients of the Program a copy of this License along with the Program. + +You may charge a fee for the physical act of transferring a copy, and +you may at your option offer warranty protection in exchange for a fee. + +2. You may modify your copy or copies of the Program or any portion of +it, thus forming a work based on the Program, and copy and distribute +such modifications or work under the terms of Section 1 above, provided +that you also meet all of these conditions: + + a) You must cause the modified files to carry prominent notices + stating that you changed the files and the date of any change. + + b) You must cause any work that you distribute or publish, that in + whole or in part contains or is derived from the Program or any part + thereof, to be licensed as a whole at no charge to all third parties + under the terms of this License. 
+ + c) If the modified program normally reads commands interactively + when run, you must cause it, when started running for such + interactive use in the most ordinary way, to print or display an + announcement including an appropriate copyright notice and a notice + that there is no warranty (or else, saying that you provide a + warranty) and that users may redistribute the program under these + conditions, and telling the user how to view a copy of this License. + (Exception: if the Program itself is interactive but does not + normally print such an announcement, your work based on the Program + is not required to print an announcement.) + +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Program, and +can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based on +the Program, the distribution of the whole must be on the terms of this +License, whose permissions for other licensees extend to the entire +whole, and thus to each and every part regardless of who wrote it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Program. + +In addition, mere aggregation of another work not based on the Program +with the Program (or with a work based on the Program) on a volume of a +storage or distribution medium does not bring the other work under the +scope of this License. + +3. 
You may copy and distribute the Program (or a work based on it, +under Section 2) in object code or executable form under the terms of +Sections 1 and 2 above provided that you also do one of the following: + + a) Accompany it with the complete corresponding machine-readable + source code, which must be distributed under the terms of Sections 1 + and 2 above on a medium customarily used for software interchange; or, + + b) Accompany it with a written offer, valid for at least three + years, to give any third party, for a charge no more than your cost + of physically performing source distribution, a complete + machine-readable copy of the corresponding source code, to be + distributed under the terms of Sections 1 and 2 above on a medium + customarily used for software interchange; or, + + c) Accompany it with the information you received as to the offer to + distribute corresponding source code. (This alternative is allowed + only for noncommercial distribution and only if you received the + program in object code or executable form with such an offer, in + accord with Subsection b above.) + +The source code for a work means the preferred form of the work for +making modifications to it. For an executable work, complete source code +means all the source code for all modules it contains, plus any +associated interface definition files, plus the scripts used to control +compilation and installation of the executable. However, as a special +exception, the source code distributed need not include anything that is +normally distributed (in either source or binary form) with the major +components (compiler, kernel, and so on) of the operating system on +which the executable runs, unless that component itself accompanies the +executable. 
+ +If distribution of executable or object code is made by offering access +to copy from a designated place, then offering equivalent access to copy +the source code from the same place counts as distribution of the source +code, even though third parties are not compelled to copy the source +along with the object code. + +4. You may not copy, modify, sublicense, or distribute the Program +except as expressly provided under this License. Any attempt otherwise +to copy, modify, sublicense or distribute the Program is void, and will +automatically terminate your rights under this License. However, parties +who have received copies, or rights, from you under this License will +not have their licenses terminated so long as such parties remain in +full compliance. + +5. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Program or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Program (or any work based on the +Program), you indicate your acceptance of this License to do so, and all +its terms and conditions for copying, distributing or modifying the +Program or works based on it. + +6. Each time you redistribute the Program (or any work based on the +Program), the recipient automatically receives a license from the +original licensor to copy, distribute or modify the Program subject to +these terms and conditions. You may not impose any further restrictions +on the recipients' exercise of the rights granted herein. You are not +responsible for enforcing compliance by third parties to this License. + +7. 
If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot distribute +so as to satisfy simultaneously your obligations under this License and +any other pertinent obligations, then as a consequence you may not +distribute the Program at all. For example, if a patent license would +not permit royalty-free redistribution of the Program by all those who +receive copies directly or indirectly through you, then the only way you +could satisfy both it and this License would be to refrain entirely from +distribution of the Program. + +If any portion of this section is held invalid or unenforceable under +any particular circumstance, the balance of the section is intended to +apply and the section as a whole is intended to apply in other +circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system, which is implemented +by public license practices. Many people have made generous +contributions to the wide range of software distributed through that +system in reliance on consistent application of that system; it is up to +the author/donor to decide if he or she is willing to distribute +software through any other system and a licensee cannot impose that choice. + +This section is intended to make thoroughly clear what is believed to be +a consequence of the rest of this License. + +8. 
If the distribution and/or use of the Program is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Program under this License may +add an explicit geographical distribution limitation excluding those +countries, so that distribution is permitted only in or among countries +not thus excluded. In such case, this License incorporates the +limitation as if written in the body of this License. + +9. The Free Software Foundation may publish revised and/or new +versions of the General Public License from time to time. Such new +versions will be similar in spirit to the present version, but may +differ in detail to address new problems or concerns. + +Each version is given a distinguishing version number. If the Program +specifies a version number of this License which applies to it and "any +later version", you have the option of following the terms and +conditions either of that version or of any later version published by +the Free Software Foundation. If the Program does not specify a version +number of this License, you may choose any version ever published by the +Free Software Foundation. + +10. If you wish to incorporate parts of the Program into other free +programs whose distribution conditions are different, write to the +author to ask for permission. For software which is copyrighted by the +Free Software Foundation, write to the Free Software Foundation; we +sometimes make exceptions for this. Our decision will be guided by the +two goals of preserving the free status of all derivatives of our free +software and of promoting the sharing and reuse of software generally. + +NO WARRANTY + +11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO +WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
+EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR +OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, +EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE +ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH +YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL +NECESSARY SERVICING, REPAIR OR CORRECTION. + +12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN +WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY +AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR +DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL +DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM +(INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED +INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF +THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR +OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +END OF TERMS AND CONDITIONS + +How to Apply These Terms to Your New Programs + +If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + +To do so, attach the following notices to the program. It is safest to +attach them to the start of each source file to most effectively convey +the exclusion of warranty; and each file should have at least the +"copyright" line and a pointer to where the full notice is found. + + One line to give the program's name and a brief idea of what it does. 
+ Copyright (C) + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, but + WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1335 USA + +Also add information on how to contact you by electronic and paper mail. + +If the program is interactive, make it output a short notice like this +when it starts in an interactive mode: + + Gnomovision version 69, Copyright (C) year name of author + Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type + `show w'. This is free software, and you are welcome to redistribute + it under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the +appropriate parts of the General Public License. Of course, the commands +you use may be called something other than `show w' and `show c'; they +could even be mouse-clicks or menu items--whatever suits your program. + +You should also get your employer (if you work as a programmer) or your +school, if any, to sign a "copyright disclaimer" for the program, if +necessary. Here is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the + program `Gnomovision' (which makes passes at compilers) written by + James Hacker. + + signature of Ty Coon, 1 April 1989 + Ty Coon, President of Vice + +This General Public License does not permit incorporating your program +into proprietary programs. 
If your program is a subroutine library, you +may consider it more useful to permit linking proprietary applications +with the library. If this is what you want to do, use the GNU Library +General Public License instead of this License. + +# + +Certain source files distributed by Oracle America, Inc. and/or its +affiliates are subject to the following clarification and special +exception to the GPLv2, based on the GNU Project exception for its +Classpath libraries, known as the GNU Classpath Exception, but only +where Oracle has expressly included in the particular source file's +header the words "Oracle designates this particular file as subject to +the "Classpath" exception as provided by Oracle in the LICENSE file +that accompanied this code." + +You should also note that Oracle includes multiple, independent +programs in this software package. Some of those programs are provided +under licenses deemed incompatible with the GPLv2 by the Free Software +Foundation and others. For example, the package includes programs +licensed under the Apache License, Version 2.0. Such programs are +licensed to you under their original licenses. + +Oracle facilitates your further distribution of this package by adding +the Classpath Exception to the necessary parts of its GPLv2 code, which +permits you to use that code in combination with other independent +modules not licensed under the GPLv2. However, note that this would +not permit you to commingle code under an incompatible license with +Oracle's GPLv2 licensed code by, for example, cutting and pasting such +code into a file also containing Oracle's GPLv2 licensed code and then +distributing the result. 
Additionally, if you were to remove the +Classpath Exception from any of the files to which it applies and +distribute the result, you would likely be required to license some or +all of the other code in that distribution under the GPLv2 as well, and +since the GPLv2 is incompatible with the license terms of some items +included in the distribution by Oracle, removing the Classpath +Exception could therefore effectively compromise your ability to +further distribute the package. + +Proceed with caution and we recommend that you obtain the advice of a +lawyer skilled in open source matters before removing the Classpath +Exception or making modifications to this package which may +subsequently be redistributed and/or involve the use of third party +software. + +CLASSPATH EXCEPTION +Linking this library statically or dynamically with other modules is +making a combined work based on this library. Thus, the terms and +conditions of the GNU General Public License version 2 cover the whole +combination. + +As a special exception, the copyright holders of this library give you +permission to link this library with independent modules to produce an +executable, regardless of the license terms of these independent +modules, and to copy and distribute the resulting executable under +terms of your choice, provided that you also meet, for each linked +independent module, the terms and conditions of the license of that +module. An independent module is a module which is not derived from or +based on this library. If you modify this library, you may extend this +exception to your version of the library, but you are not obligated to +do so. If you do not wish to do so, delete this exception statement +from your version. 
+ diff --git a/licenses/DWTFYWTPL b/licenses/DWTFYWTPL new file mode 100644 index 0000000..5a8e332 --- /dev/null +++ b/licenses/DWTFYWTPL @@ -0,0 +1,14 @@ + DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE + Version 2, December 2004 + + Copyright (C) 2004 Sam Hocevar + + Everyone is permitted to copy and distribute verbatim or modified + copies of this license document, and changing it is allowed as long + as the name is changed. + + DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. You just DO WHAT THE FUCK YOU WANT TO. + diff --git a/licenses/argparse-MIT b/licenses/argparse-MIT new file mode 100644 index 0000000..773b0df --- /dev/null +++ b/licenses/argparse-MIT @@ -0,0 +1,23 @@ +/* + * Copyright (C) 2011-2017 Tatsuhiro Tsujikawa + * + * Permission is hereby granted, free of charge, to any person + * obtaining a copy of this software and associated documentation + * files (the "Software"), to deal in the Software without + * restriction, including without limitation the rights to use, copy, + * modify, merge, publish, distribute, sublicense, and/or sell copies + * of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be + * included in all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. 
+ */ diff --git a/licenses/checker-qual-MIT b/licenses/checker-qual-MIT new file mode 100644 index 0000000..827585f --- /dev/null +++ b/licenses/checker-qual-MIT @@ -0,0 +1,413 @@ +The Checker Framework +Copyright 2004-present by the Checker Framework developers + + +Most of the Checker Framework is licensed under the GNU General Public +License, version 2 (GPL2), with the classpath exception. The text of this +license appears below. This is the same license used for OpenJDK. + +A few parts of the Checker Framework have more permissive licenses, notably +the parts that you might want to include with your own program. + + * The annotations and utility files are licensed under the MIT License. + (The text of this license also appears below.) This applies to + checker-qual*.jar and checker-util.jar and all the files that appear in + them, which is all files in checker-qual and checker-util directories. + It also applies to the cleanroom implementations of + third-party annotations (in checker/src/testannotations/, + framework/src/main/java/org/jmlspecs/, and + framework/src/main/java/com/google/). + +The Checker Framework includes annotations for some libraries. Those in +.astub files use the MIT License. Those in https://github.com/typetools/jdk +(which appears in the annotated-jdk directory of file checker.jar) use the +GPL2 license. + +Some external libraries that are included with the Checker Framework +distribution have different licenses. Here are some examples. + + * JavaParser is dual licensed under the LGPL or the Apache license -- you + may use it under whichever one you want. (The JavaParser source code + contains a file with the text of the GPL, but it is not clear why, since + JavaParser does not use the GPL.) See + https://github.com/typetools/stubparser . + + * Annotation Tools (https://github.com/typetools/annotation-tools) uses + the MIT license. + + * Libraries in plume-lib (https://github.com/plume-lib/) are licensed + under the MIT License. 
+ +=========================================================================== + +The GNU General Public License (GPL) + +Version 2, June 1991 + +Copyright (C) 1989, 1991 Free Software Foundation, Inc. +59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +Everyone is permitted to copy and distribute verbatim copies of this license +document, but changing it is not allowed. + +Preamble + +The licenses for most software are designed to take away your freedom to share +and change it. By contrast, the GNU General Public License is intended to +guarantee your freedom to share and change free software--to make sure the +software is free for all its users. This General Public License applies to +most of the Free Software Foundation's software and to any other program whose +authors commit to using it. (Some other Free Software Foundation software is +covered by the GNU Library General Public License instead.) You can apply it to +your programs, too. + +When we speak of free software, we are referring to freedom, not price. Our +General Public Licenses are designed to make sure that you have the freedom to +distribute copies of free software (and charge for this service if you wish), +that you receive source code or can get it if you want it, that you can change +the software or use pieces of it in new free programs; and that you know you +can do these things. + +To protect your rights, we need to make restrictions that forbid anyone to deny +you these rights or to ask you to surrender the rights. These restrictions +translate to certain responsibilities for you if you distribute copies of the +software, or if you modify it. + +For example, if you distribute copies of such a program, whether gratis or for +a fee, you must give the recipients all the rights that you have. You must +make sure that they, too, receive or can get the source code. And you must +show them these terms so they know their rights. 
+ +We protect your rights with two steps: (1) copyright the software, and (2) +offer you this license which gives you legal permission to copy, distribute +and/or modify the software. + +Also, for each author's protection and ours, we want to make certain that +everyone understands that there is no warranty for this free software. If the +software is modified by someone else and passed on, we want its recipients to +know that what they have is not the original, so that any problems introduced +by others will not reflect on the original authors' reputations. + +Finally, any free program is threatened constantly by software patents. We +wish to avoid the danger that redistributors of a free program will +individually obtain patent licenses, in effect making the program proprietary. +To prevent this, we have made it clear that any patent must be licensed for +everyone's free use or not licensed at all. + +The precise terms and conditions for copying, distribution and modification +follow. + +TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + +0. This License applies to any program or other work which contains a notice +placed by the copyright holder saying it may be distributed under the terms of +this General Public License. The "Program", below, refers to any such program +or work, and a "work based on the Program" means either the Program or any +derivative work under copyright law: that is to say, a work containing the +Program or a portion of it, either verbatim or with modifications and/or +translated into another language. (Hereinafter, translation is included +without limitation in the term "modification".) Each licensee is addressed as +"you". + +Activities other than copying, distribution and modification are not covered by +this License; they are outside its scope. 
The act of running the Program is +not restricted, and the output from the Program is covered only if its contents +constitute a work based on the Program (independent of having been made by +running the Program). Whether that is true depends on what the Program does. + +1. You may copy and distribute verbatim copies of the Program's source code as +you receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice and +disclaimer of warranty; keep intact all the notices that refer to this License +and to the absence of any warranty; and give any other recipients of the +Program a copy of this License along with the Program. + +You may charge a fee for the physical act of transferring a copy, and you may +at your option offer warranty protection in exchange for a fee. + +2. You may modify your copy or copies of the Program or any portion of it, thus +forming a work based on the Program, and copy and distribute such modifications +or work under the terms of Section 1 above, provided that you also meet all of +these conditions: + + a) You must cause the modified files to carry prominent notices stating + that you changed the files and the date of any change. + + b) You must cause any work that you distribute or publish, that in whole or + in part contains or is derived from the Program or any part thereof, to be + licensed as a whole at no charge to all third parties under the terms of + this License. + + c) If the modified program normally reads commands interactively when run, + you must cause it, when started running for such interactive use in the + most ordinary way, to print or display an announcement including an + appropriate copyright notice and a notice that there is no warranty (or + else, saying that you provide a warranty) and that users may redistribute + the program under these conditions, and telling the user how to view a copy + of this License. 
(Exception: if the Program itself is interactive but does + not normally print such an announcement, your work based on the Program is + not required to print an announcement.) + +These requirements apply to the modified work as a whole. If identifiable +sections of that work are not derived from the Program, and can be reasonably +considered independent and separate works in themselves, then this License, and +its terms, do not apply to those sections when you distribute them as separate +works. But when you distribute the same sections as part of a whole which is a +work based on the Program, the distribution of the whole must be on the terms +of this License, whose permissions for other licensees extend to the entire +whole, and thus to each and every part regardless of who wrote it. + +Thus, it is not the intent of this section to claim rights or contest your +rights to work written entirely by you; rather, the intent is to exercise the +right to control the distribution of derivative or collective works based on +the Program. + +In addition, mere aggregation of another work not based on the Program with the +Program (or with a work based on the Program) on a volume of a storage or +distribution medium does not bring the other work under the scope of this +License. + +3. 
You may copy and distribute the Program (or a work based on it, under +Section 2) in object code or executable form under the terms of Sections 1 and +2 above provided that you also do one of the following: + + a) Accompany it with the complete corresponding machine-readable source + code, which must be distributed under the terms of Sections 1 and 2 above + on a medium customarily used for software interchange; or, + + b) Accompany it with a written offer, valid for at least three years, to + give any third party, for a charge no more than your cost of physically + performing source distribution, a complete machine-readable copy of the + corresponding source code, to be distributed under the terms of Sections 1 + and 2 above on a medium customarily used for software interchange; or, + + c) Accompany it with the information you received as to the offer to + distribute corresponding source code. (This alternative is allowed only + for noncommercial distribution and only if you received the program in + object code or executable form with such an offer, in accord with + Subsection b above.) + +The source code for a work means the preferred form of the work for making +modifications to it. For an executable work, complete source code means all +the source code for all modules it contains, plus any associated interface +definition files, plus the scripts used to control compilation and installation +of the executable. However, as a special exception, the source code +distributed need not include anything that is normally distributed (in either +source or binary form) with the major components (compiler, kernel, and so on) +of the operating system on which the executable runs, unless that component +itself accompanies the executable. 
+ +If distribution of executable or object code is made by offering access to copy +from a designated place, then offering equivalent access to copy the source +code from the same place counts as distribution of the source code, even though +third parties are not compelled to copy the source along with the object code. + +4. You may not copy, modify, sublicense, or distribute the Program except as +expressly provided under this License. Any attempt otherwise to copy, modify, +sublicense or distribute the Program is void, and will automatically terminate +your rights under this License. However, parties who have received copies, or +rights, from you under this License will not have their licenses terminated so +long as such parties remain in full compliance. + +5. You are not required to accept this License, since you have not signed it. +However, nothing else grants you permission to modify or distribute the Program +or its derivative works. These actions are prohibited by law if you do not +accept this License. Therefore, by modifying or distributing the Program (or +any work based on the Program), you indicate your acceptance of this License to +do so, and all its terms and conditions for copying, distributing or modifying +the Program or works based on it. + +6. Each time you redistribute the Program (or any work based on the Program), +the recipient automatically receives a license from the original licensor to +copy, distribute or modify the Program subject to these terms and conditions. +You may not impose any further restrictions on the recipients' exercise of the +rights granted herein. You are not responsible for enforcing compliance by +third parties to this License. + +7. 
If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), conditions +are imposed on you (whether by court order, agreement or otherwise) that +contradict the conditions of this License, they do not excuse you from the +conditions of this License. If you cannot distribute so as to satisfy +simultaneously your obligations under this License and any other pertinent +obligations, then as a consequence you may not distribute the Program at all. +For example, if a patent license would not permit royalty-free redistribution +of the Program by all those who receive copies directly or indirectly through +you, then the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Program. + +If any portion of this section is held invalid or unenforceable under any +particular circumstance, the balance of the section is intended to apply and +the section as a whole is intended to apply in other circumstances. + +It is not the purpose of this section to induce you to infringe any patents or +other property right claims or to contest validity of any such claims; this +section has the sole purpose of protecting the integrity of the free software +distribution system, which is implemented by public license practices. Many +people have made generous contributions to the wide range of software +distributed through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing to +distribute software through any other system and a licensee cannot impose that +choice. + +This section is intended to make thoroughly clear what is believed to be a +consequence of the rest of this License. + +8. 
If the distribution and/or use of the Program is restricted in certain +countries either by patents or by copyrighted interfaces, the original +copyright holder who places the Program under this License may add an explicit +geographical distribution limitation excluding those countries, so that +distribution is permitted only in or among countries not thus excluded. In +such case, this License incorporates the limitation as if written in the body +of this License. + +9. The Free Software Foundation may publish revised and/or new versions of the +General Public License from time to time. Such new versions will be similar in +spirit to the present version, but may differ in detail to address new problems +or concerns. + +Each version is given a distinguishing version number. If the Program +specifies a version number of this License which applies to it and "any later +version", you have the option of following the terms and conditions either of +that version or of any later version published by the Free Software Foundation. +If the Program does not specify a version number of this License, you may +choose any version ever published by the Free Software Foundation. + +10. If you wish to incorporate parts of the Program into other free programs +whose distribution conditions are different, write to the author to ask for +permission. For software which is copyrighted by the Free Software Foundation, +write to the Free Software Foundation; we sometimes make exceptions for this. +Our decision will be guided by the two goals of preserving the free status of +all derivatives of our free software and of promoting the sharing and reuse of +software generally. + +NO WARRANTY + +11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR +THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE +STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE +PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, +INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND +FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND +PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, +YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + +12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL +ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE +PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY +GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR +INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA +BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A +FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER +OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +END OF TERMS AND CONDITIONS + +How to Apply These Terms to Your New Programs + +If you develop a new program, and you want it to be of the greatest possible +use to the public, the best way to achieve this is to make it free software +which everyone can redistribute and change under these terms. + +To do so, attach the following notices to the program. It is safest to attach +them to the start of each source file to most effectively convey the exclusion +of warranty; and each file should have at least the "copyright" line and a +pointer to where the full notice is found. + + One line to give the program's name and a brief idea of what it does. 
+ + Copyright (C) + + This program is free software; you can redistribute it and/or modify it + under the terms of the GNU General Public License as published by the Free + Software Foundation; either version 2 of the License, or (at your option) + any later version. + + This program is distributed in the hope that it will be useful, but WITHOUT + ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + more details. + + You should have received a copy of the GNU General Public License along + with this program; if not, write to the Free Software Foundation, Inc., 59 + Temple Place, Suite 330, Boston, MA 02111-1307 USA + +Also add information on how to contact you by electronic and paper mail. + +If the program is interactive, make it output a short notice like this when it +starts in an interactive mode: + + Gnomovision version 69, Copyright (C) year name of author Gnomovision comes + with ABSOLUTELY NO WARRANTY; for details type 'show w'. This is free + software, and you are welcome to redistribute it under certain conditions; + type 'show c' for details. + +The hypothetical commands 'show w' and 'show c' should show the appropriate +parts of the General Public License. Of course, the commands you use may be +called something other than 'show w' and 'show c'; they could even be +mouse-clicks or menu items--whatever suits your program. + +You should also get your employer (if you work as a programmer) or your school, +if any, to sign a "copyright disclaimer" for the program, if necessary. Here +is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the program + 'Gnomovision' (which makes passes at compilers) written by James Hacker. + + signature of Ty Coon, 1 April 1989 + + Ty Coon, President of Vice + +This General Public License does not permit incorporating your program into +proprietary programs. 
If your program is a subroutine library, you may +consider it more useful to permit linking proprietary applications with the +library. If this is what you want to do, use the GNU Library General Public +License instead of this License. + + +"CLASSPATH" EXCEPTION TO THE GPL + +Certain source files distributed by Oracle America and/or its affiliates are +subject to the following clarification and special exception to the GPL, but +only where Oracle has expressly included in the particular source file's header +the words "Oracle designates this particular file as subject to the "Classpath" +exception as provided by Oracle in the LICENSE file that accompanied this code." + + Linking this library statically or dynamically with other modules is making + a combined work based on this library. Thus, the terms and conditions of + the GNU General Public License cover the whole combination. + + As a special exception, the copyright holders of this library give you + permission to link this library with independent modules to produce an + executable, regardless of the license terms of these independent modules, + and to copy and distribute the resulting executable under terms of your + choice, provided that you also meet, for each linked independent module, + the terms and conditions of the license of that module. An independent + module is a module which is not derived from or based on this library. If + you modify this library, you may extend this exception to your version of + the library, but you are not obligated to do so. If you do not wish to do + so, delete this exception statement from your version. 
+ +=========================================================================== + +MIT License: + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. + +=========================================================================== diff --git a/licenses/eclipse-distribution-license-1.0 b/licenses/eclipse-distribution-license-1.0 new file mode 100644 index 0000000..5f06513 --- /dev/null +++ b/licenses/eclipse-distribution-license-1.0 @@ -0,0 +1,13 @@ +Eclipse Distribution License - v 1.0 + +Copyright (c) 2007, Eclipse Foundation, Inc. and its licensors. + +All rights reserved. + +Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: + +* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 
+* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. +* Neither the name of the Eclipse Foundation, Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/eclipse-public-license-2.0 b/licenses/eclipse-public-license-2.0 new file mode 100644 index 0000000..c9f1425 --- /dev/null +++ b/licenses/eclipse-public-license-2.0 @@ -0,0 +1,87 @@ +Eclipse Public License - v 2.0 + +THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE (“AGREEMENT”). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. +1. DEFINITIONS + +“Contribution” means: + + a) in the case of the initial Contributor, the initial content Distributed under this Agreement, and + b) in the case of each subsequent Contributor: + i) changes to the Program, and + ii) additions to the Program; + where such changes and/or additions to the Program originate from and are Distributed by that particular Contributor. 
A Contribution “originates” from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include changes or additions to the Program that are not Modified Works. + +“Contributor” means any person or entity that Distributes the Program. + +“Licensed Patents” mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program. + +“Program” means the Contributions Distributed in accordance with this Agreement. + +“Recipient” means anyone who receives the Program under this Agreement or any Secondary License (as applicable), including Contributors. + +“Derivative Works” shall mean any work, whether in Source Code or other form, that is based on (or derived from) the Program and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. + +“Modified Works” shall mean any work in Source Code or other form that results from an addition to, deletion from, or modification of the contents of the Program, including, for purposes of clarity any new file in Source Code form that contains any contents of the Program. Modified Works shall not include works that contain only declarations, interfaces, types, classes, structures, or files of the Program solely in each case in order to link to, bind by name, or subclass the Program or Modified Works thereof. + +“Distribute” means the acts of a) distributing or b) making available in any manner that enables the transfer of a copy. + +“Source Code” means the form of a Program preferred for making modifications, including but not limited to software source code, documentation source, and configuration files. 
+ +“Secondary License” means either the GNU General Public License, Version 2.0, or any later versions of that license, including any exceptions or additional permissions as identified by the initial Contributor. +2. GRANT OF RIGHTS + + a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, Distribute and sublicense the Contribution of such Contributor, if any, and such Derivative Works. + b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in Source Code or other form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder. + c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. 
For example, if a third party patent license is required to allow Recipient to Distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program. + d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement. + e) Notwithstanding the terms of any Secondary License, no Contributor makes additional grants to any Recipient (other than those set forth in this Agreement) as a result of such Recipient's receipt of the Program under the terms of a Secondary License (if permitted under the terms of Section 3). + +3. REQUIREMENTS + +3.1 If a Contributor Distributes the Program in any form, then: + + a) the Program must also be made available as Source Code, in accordance with section 3.2, and the Contributor must accompany the Program with a statement that the Source Code for the Program is available under this Agreement, and informs Recipients how to obtain it in a reasonable manner on or through a medium customarily used for software exchange; and + b) the Contributor may Distribute the Program under a license different than this Agreement, provided that such license: + i) effectively disclaims on behalf of all other Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; + ii) effectively excludes on behalf of all other Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; + iii) does not attempt to limit or alter the recipients' rights in the Source Code under section 3.2; and + iv) requires any subsequent distribution of the Program by any party to be under a license that satisfies the requirements of this section 3. 
+ +3.2 When the Program is Distributed as Source Code: + + a) it must be made available under this Agreement, or if the Program (i) is combined with other material in a separate file or files made available under a Secondary License, and (ii) the initial Contributor attached to the Source Code the notice described in Exhibit A of this Agreement, then the Program may be made available under the terms of such Secondary Licenses, and + b) a copy of this Agreement must be included with each copy of the Program. + +3.3 Contributors may not remove or alter any copyright, patent, trademark, attribution notices, disclaimers of warranty, or limitations of liability (‘notices’) contained within the Program from any copy of the Program which they Distribute, provided that Contributors may add their own appropriate notices. +4. COMMERCIAL DISTRIBUTION + +Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor (“Commercial Contributor”) hereby agrees to defend and indemnify every other Contributor (“Indemnified Contributor”) against any losses, damages and costs (collectively “Losses”) arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. 
In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense. + +For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages. +5. NO WARRANTY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE PROGRAM IS PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. +6. 
DISCLAIMER OF LIABILITY + +EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT PERMITTED BY APPLICABLE LAW, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. +7. GENERAL + +If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. + +If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed. + +All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive. 
+ +Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be Distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to Distribute the Program (including its Contributions) under the new version. + +Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved. Nothing in this Agreement is intended to be enforceable by any entity that is not a Contributor or Recipient. No third-party beneficiary rights are created under this Agreement. +Exhibit A – Form of Secondary Licenses Notice + +“This Source Code may also be made available under the following Secondary Licenses when the conditions for such availability set forth in the Eclipse Public License, v. 2.0 are satisfied: {name license(s), version(s), and exceptions or additional permissions here}.” + + Simply including a copy of this Agreement, including this Exhibit A is not sufficient to license the Source Code under Secondary Licenses. 
+ + If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. + + You may add additional accurate notices of copyright ownership. + diff --git a/licenses/jline-BSD-3-clause b/licenses/jline-BSD-3-clause new file mode 100644 index 0000000..7e11b67 --- /dev/null +++ b/licenses/jline-BSD-3-clause @@ -0,0 +1,35 @@ +Copyright (c) 2002-2018, the original author or authors. +All rights reserved. + +https://opensource.org/licenses/BSD-3-Clause + +Redistribution and use in source and binary forms, with or +without modification, are permitted provided that the following +conditions are met: + +Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + +Redistributions in binary form must reproduce the above copyright +notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with +the distribution. + +Neither the name of JLine nor the names of its contributors +may be used to endorse or promote products derived from this +software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, +BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY +AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO +EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, +OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, +PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED +AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING +IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED +OF THE POSSIBILITY OF SUCH DAMAGE. + diff --git a/licenses/jopt-simple-MIT b/licenses/jopt-simple-MIT new file mode 100644 index 0000000..54b2732 --- /dev/null +++ b/licenses/jopt-simple-MIT @@ -0,0 +1,24 @@ +/* + The MIT License + + Copyright (c) 2004-2016 Paul R. Holser, Jr. + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + "Software"), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be + included in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE + LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION + OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION + WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
+*/ diff --git a/licenses/paranamer-BSD-3-clause b/licenses/paranamer-BSD-3-clause new file mode 100644 index 0000000..9eab879 --- /dev/null +++ b/licenses/paranamer-BSD-3-clause @@ -0,0 +1,29 @@ +[ ParaNamer used to be 'Pubic Domain', but since it includes a small piece of ASM it is now the same license as that: BSD ] + + Portions copyright (c) 2006-2018 Paul Hammant & ThoughtWorks Inc + Portions copyright (c) 2000-2007 INRIA, France Telecom + All rights reserved. + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions + are met: + 1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + 3. Neither the name of the copyright holders nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" + AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE + LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF + SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN + CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF + THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/licenses/pcollections-MIT b/licenses/pcollections-MIT new file mode 100644 index 0000000..50519c5 --- /dev/null +++ b/licenses/pcollections-MIT @@ -0,0 +1,24 @@ +MIT License + +Copyright 2008-2011, 2014-2020, 2022 Harold Cooper, gil cattaneo, Gleb Frank, +Günther Grill, Ilya Gorbunov, Jirka Kremser, Jochen Theodorou, Johnny Lim, +Liam Miller, Mark Perry, Matei Dragu, Mike Klein, Oleg Osipenko, Ran Ari-Gur, +Shantanu Kumar, and Valeriy Vyrva. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/licenses/slf4j-MIT b/licenses/slf4j-MIT new file mode 100644 index 0000000..315bd49 --- /dev/null +++ b/licenses/slf4j-MIT @@ -0,0 +1,24 @@ +Copyright (c) 2004-2017 QOS.ch +All rights reserved. 
+ +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + + + diff --git a/licenses/zstd-jni-BSD-2-clause b/licenses/zstd-jni-BSD-2-clause new file mode 100644 index 0000000..66abb8a --- /dev/null +++ b/licenses/zstd-jni-BSD-2-clause @@ -0,0 +1,26 @@ +Zstd-jni: JNI bindings to Zstd Library + +Copyright (c) 2015-present, Luben Karavelov/ All rights reserved. + +BSD License + +Redistribution and use in source and binary forms, with or without modification, +are permitted provided that the following conditions are met: + +* Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +* Redistributions in binary form must reproduce the above copyright notice, this + list of conditions and the following disclaimer in the documentation and/or + other materials provided with the distribution. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON +ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/logs/controller.log b/logs/controller.log new file mode 100644 index 0000000..b654633 --- /dev/null +++ b/logs/controller.log @@ -0,0 +1,63 @@ +[2023-11-03 15:24:54,423] INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) +[2023-11-03 15:24:54,437] INFO [Controller id=0] 0 successfully elected as the controller. 
Epoch incremented to 2 and epoch zk version is now 2 (kafka.controller.KafkaController) +[2023-11-03 15:24:54,438] INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController) +[2023-11-03 15:24:54,442] INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController) +[2023-11-03 15:24:54,444] INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController) +[2023-11-03 15:24:54,445] INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController) +[2023-11-03 15:24:54,456] INFO [Controller id=0] Initialized broker epochs cache: HashMap(0 -> 173) (kafka.controller.KafkaController) +[2023-11-03 15:24:54,464] DEBUG [Controller id=0] Register BrokerModifications handler for Set(0) (kafka.controller.KafkaController) +[2023-11-03 15:24:54,487] DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 0 (kafka.controller.ControllerChannelManager) +[2023-11-03 15:24:54,491] INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread) +[2023-11-03 15:24:54,491] INFO [Controller id=0] Currently active brokers in the cluster: Set(0) (kafka.controller.KafkaController) +[2023-11-03 15:24:54,492] INFO [Controller id=0] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) +[2023-11-03 15:24:54,492] INFO [Controller id=0] Current list of topics in the cluster: HashSet(OrderEventQA2, __consumer_offsets) (kafka.controller.KafkaController) +[2023-11-03 15:24:54,492] INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController) +[2023-11-03 15:24:54,496] INFO [Controller id=0] List of topics to be deleted: (kafka.controller.KafkaController) +[2023-11-03 15:24:54,497] INFO [Controller id=0] List of topics ineligible for deletion: (kafka.controller.KafkaController) +[2023-11-03 15:24:54,497] INFO [Controller id=0] Initializing topic deletion manager 
(kafka.controller.KafkaController) +[2023-11-03 15:24:54,497] INFO [Topic Deletion Manager 0] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) +[2023-11-03 15:24:54,498] INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController) +[2023-11-03 15:24:54,505] INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ZkReplicaStateMachine) +[2023-11-03 15:24:54,509] INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) +[2023-11-03 15:24:54,511] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker localhost:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread) +java.io.IOException: Connection to localhost:9092 (id: 0 rack: null) failed. + at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) + at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) + at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) + at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) +[2023-11-03 15:24:54,531] INFO [ReplicaStateMachine controllerId=0] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) +[2023-11-03 15:24:54,531] DEBUG [ReplicaStateMachine controllerId=0] Started replica state machine with initial state -> HashMap([Topic=OrderEventQA2,Partition=0,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=40,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=27,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=49,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=47,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=3,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=18,Replica=0] -> OnlineReplica, 
[Topic=__consumer_offsets,Partition=44,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=8,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=34,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=25,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=14,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=24,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=36,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=42,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=45,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=11,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=32,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=12,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=30,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=9,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=39,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=38,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=23,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=19,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=17,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=41,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=37,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=48,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=29,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=10,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=46,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=1,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=16,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=5,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=15,Replica=0] -> OnlineReplica, 
[Topic=__consumer_offsets,Partition=4,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=6,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=7,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=43,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=0,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=20,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=31,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=28,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=26,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=2,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=33,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=22,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=21,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=13,Replica=0] -> OnlineReplica, [Topic=__consumer_offsets,Partition=35,Replica=0] -> OnlineReplica) (kafka.controller.ZkReplicaStateMachine) +[2023-11-03 15:24:54,531] INFO [PartitionStateMachine controllerId=0] Initializing partition state (kafka.controller.ZkPartitionStateMachine) +[2023-11-03 15:24:54,534] INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) +[2023-11-03 15:24:54,535] DEBUG [PartitionStateMachine controllerId=0] Started partition state machine with initial state -> HashMap(__consumer_offsets-13 -> OnlinePartition, __consumer_offsets-46 -> OnlinePartition, __consumer_offsets-9 -> OnlinePartition, __consumer_offsets-42 -> OnlinePartition, __consumer_offsets-21 -> OnlinePartition, __consumer_offsets-17 -> OnlinePartition, __consumer_offsets-30 -> OnlinePartition, OrderEventQA2-0 -> OnlinePartition, __consumer_offsets-26 -> OnlinePartition, __consumer_offsets-5 -> OnlinePartition, __consumer_offsets-38 -> OnlinePartition, __consumer_offsets-1 -> OnlinePartition, 
__consumer_offsets-34 -> OnlinePartition, __consumer_offsets-16 -> OnlinePartition, __consumer_offsets-45 -> OnlinePartition, __consumer_offsets-12 -> OnlinePartition, __consumer_offsets-41 -> OnlinePartition, __consumer_offsets-24 -> OnlinePartition, __consumer_offsets-20 -> OnlinePartition, __consumer_offsets-49 -> OnlinePartition, __consumer_offsets-0 -> OnlinePartition, __consumer_offsets-29 -> OnlinePartition, __consumer_offsets-25 -> OnlinePartition, __consumer_offsets-8 -> OnlinePartition, __consumer_offsets-37 -> OnlinePartition, __consumer_offsets-4 -> OnlinePartition, __consumer_offsets-33 -> OnlinePartition, __consumer_offsets-15 -> OnlinePartition, __consumer_offsets-48 -> OnlinePartition, __consumer_offsets-11 -> OnlinePartition, __consumer_offsets-44 -> OnlinePartition, __consumer_offsets-23 -> OnlinePartition, __consumer_offsets-19 -> OnlinePartition, __consumer_offsets-32 -> OnlinePartition, __consumer_offsets-28 -> OnlinePartition, __consumer_offsets-7 -> OnlinePartition, __consumer_offsets-40 -> OnlinePartition, __consumer_offsets-3 -> OnlinePartition, __consumer_offsets-36 -> OnlinePartition, __consumer_offsets-47 -> OnlinePartition, __consumer_offsets-14 -> OnlinePartition, __consumer_offsets-43 -> OnlinePartition, __consumer_offsets-10 -> OnlinePartition, __consumer_offsets-22 -> OnlinePartition, __consumer_offsets-18 -> OnlinePartition, __consumer_offsets-31 -> OnlinePartition, __consumer_offsets-27 -> OnlinePartition, __consumer_offsets-39 -> OnlinePartition, __consumer_offsets-6 -> OnlinePartition, __consumer_offsets-35 -> OnlinePartition, __consumer_offsets-2 -> OnlinePartition) (kafka.controller.ZkPartitionStateMachine) +[2023-11-03 15:24:54,536] INFO [Controller id=0] Ready to serve as the new controller with epoch 2 (kafka.controller.KafkaController) +[2023-11-03 15:24:54,540] INFO [Controller id=0] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) +[2023-11-03 15:24:54,540] INFO [Controller id=0] 
Partitions that completed preferred replica election: (kafka.controller.KafkaController) +[2023-11-03 15:24:54,540] INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) +[2023-11-03 15:24:54,540] INFO [Controller id=0] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) +[2023-11-03 15:24:54,541] INFO [Controller id=0] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) +[2023-11-03 15:24:54,548] INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController) +[2023-11-03 15:24:54,615] INFO [RequestSendThread controllerId=0] Controller 0 connected to localhost:9092 (id: 0 rack: null) for sending state change requests (kafka.controller.RequestSendThread) +[2023-11-03 15:24:59,550] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 15:24:59,551] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) +[2023-11-03 15:24:59,565] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 HashMap() (kafka.controller.KafkaController) +[2023-11-03 15:24:59,569] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) +[2023-11-03 15:29:59,570] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 15:29:59,570] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) +[2023-11-03 15:29:59,574] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 HashMap() (kafka.controller.KafkaController) +[2023-11-03 15:29:59,574] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) +[2023-11-03 15:34:59,575] INFO 
[Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 15:34:59,575] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) +[2023-11-03 15:34:59,578] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 HashMap() (kafka.controller.KafkaController) +[2023-11-03 15:34:59,578] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) +[2023-11-03 15:36:27,543] INFO [Controller id=0] New topics: [Set(test-topic)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(test-topic,Some(Gh__wBw-TqeS8XzMJZBzeA),Map(test-topic-0 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) +[2023-11-03 15:36:27,545] INFO [Controller id=0] New partition creation callback for test-topic-0 (kafka.controller.KafkaController) +[2023-11-03 15:36:27,580] INFO [RequestSendThread controllerId=0] Controller 0 connected to localhost:9092 (id: 0 rack: null) for sending state change requests (kafka.controller.RequestSendThread) +[2023-11-03 15:39:59,579] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 15:39:59,579] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) +[2023-11-03 15:39:59,582] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 HashMap() (kafka.controller.KafkaController) +[2023-11-03 15:39:59,582] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) +[2023-11-03 15:44:59,583] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 15:44:59,583] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 
+[2023-11-03 15:44:59,585] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 HashMap() (kafka.controller.KafkaController) +[2023-11-03 15:44:59,585] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) diff --git a/logs/controller.log.2023-11-03-10 b/logs/controller.log.2023-11-03-10 new file mode 100644 index 0000000..e69de29 diff --git a/logs/controller.log.2023-11-03-14 b/logs/controller.log.2023-11-03-14 new file mode 100644 index 0000000..3c0c507 --- /dev/null +++ b/logs/controller.log.2023-11-03-14 @@ -0,0 +1,75 @@ +[2023-11-03 14:01:23,353] INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) +[2023-11-03 14:01:23,369] INFO [Controller id=0] 0 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) +[2023-11-03 14:01:23,372] INFO [Controller id=0] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) +[2023-11-03 14:01:23,390] INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController) +[2023-11-03 14:01:23,393] INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController) +[2023-11-03 14:01:23,395] INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController) +[2023-11-03 14:01:23,397] INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController) +[2023-11-03 14:01:23,408] INFO [Controller id=0] Initialized broker epochs cache: HashMap(0 -> 25) (kafka.controller.KafkaController) +[2023-11-03 14:01:23,413] DEBUG [Controller id=0] Register BrokerModifications handler for Set(0) (kafka.controller.KafkaController) +[2023-11-03 14:01:23,417] DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 0 (kafka.controller.ControllerChannelManager) +[2023-11-03 
14:01:23,421] INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread) +[2023-11-03 14:01:23,422] INFO [Controller id=0] Currently active brokers in the cluster: Set(0) (kafka.controller.KafkaController) +[2023-11-03 14:01:23,422] INFO [Controller id=0] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) +[2023-11-03 14:01:23,422] INFO [Controller id=0] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) +[2023-11-03 14:01:23,422] INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController) +[2023-11-03 14:01:23,424] INFO [Controller id=0] List of topics to be deleted: (kafka.controller.KafkaController) +[2023-11-03 14:01:23,424] INFO [Controller id=0] List of topics ineligible for deletion: (kafka.controller.KafkaController) +[2023-11-03 14:01:23,424] INFO [Controller id=0] Initializing topic deletion manager (kafka.controller.KafkaController) +[2023-11-03 14:01:23,425] INFO [Topic Deletion Manager 0] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) +[2023-11-03 14:01:23,425] INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController) +[2023-11-03 14:01:23,435] INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ZkReplicaStateMachine) +[2023-11-03 14:01:23,436] INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) +[2023-11-03 14:01:23,441] INFO [ReplicaStateMachine controllerId=0] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) +[2023-11-03 14:01:23,441] DEBUG [ReplicaStateMachine controllerId=0] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) +[2023-11-03 14:01:23,441] INFO [PartitionStateMachine controllerId=0] Initializing 
partition state (kafka.controller.ZkPartitionStateMachine) +[2023-11-03 14:01:23,441] INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) +[2023-11-03 14:01:23,443] DEBUG [PartitionStateMachine controllerId=0] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) +[2023-11-03 14:01:23,443] INFO [Controller id=0] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) +[2023-11-03 14:01:23,447] INFO [Controller id=0] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) +[2023-11-03 14:01:23,447] INFO [Controller id=0] Partitions that completed preferred replica election: (kafka.controller.KafkaController) +[2023-11-03 14:01:23,447] INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) +[2023-11-03 14:01:23,448] INFO [Controller id=0] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) +[2023-11-03 14:01:23,448] INFO [Controller id=0] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) +[2023-11-03 14:01:23,454] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker localhost:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread) +java.io.IOException: Connection to localhost:9092 (id: 0 rack: null) failed. 
+ at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) + at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) + at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) + at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) +[2023-11-03 14:01:23,455] INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController) +[2023-11-03 14:01:23,557] INFO [RequestSendThread controllerId=0] Controller 0 connected to localhost:9092 (id: 0 rack: null) for sending state change requests (kafka.controller.RequestSendThread) +[2023-11-03 14:01:28,456] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 14:01:28,456] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) +[2023-11-03 14:06:28,458] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 14:06:28,458] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) +[2023-11-03 14:11:28,458] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 14:11:28,458] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) +[2023-11-03 14:16:28,459] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 14:16:28,459] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) +[2023-11-03 14:21:28,460] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 14:21:28,460] TRACE [Controller id=0] Checking need to trigger auto leader balancing 
(kafka.controller.KafkaController) +[2023-11-03 14:23:03,096] INFO [Controller id=0] New topics: [Set(OrderEventQA2)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(OrderEventQA2,Some(78sflbJnR2GXTOOat5Yo7Q),Map(OrderEventQA2-0 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) +[2023-11-03 14:23:03,097] INFO [Controller id=0] New partition creation callback for OrderEventQA2-0 (kafka.controller.KafkaController) +[2023-11-03 14:23:03,125] INFO [RequestSendThread controllerId=0] Controller 0 connected to localhost:9092 (id: 0 rack: null) for sending state change requests (kafka.controller.RequestSendThread) +[2023-11-03 14:23:04,134] INFO [Controller id=0] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(PIKFaFrMTbm2cK6klZ1I7A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> 
ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), 
__consumer_offsets-43 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) +[2023-11-03 14:23:04,134] INFO [Controller id=0] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) +[2023-11-03 14:26:28,460] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) +[2023-11-03 14:26:28,460] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) +[2023-11-03 14:26:28,462] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 HashMap() (kafka.controller.KafkaController) +[2023-11-03 14:26:28,463] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) +[2023-11-03 14:29:33,650] INFO [Controller id=0] Shutting down broker 0 (kafka.controller.KafkaController) +[2023-11-03 14:29:33,651] DEBUG [Controller id=0] All shutting down brokers: 0 (kafka.controller.KafkaController) +[2023-11-03 14:29:33,651] DEBUG [Controller id=0] Live brokers: (kafka.controller.KafkaController) 
+[2023-11-03 14:29:33,654] TRACE [Controller id=0] All leaders = __consumer_offsets-13 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-46 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-9 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-42 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-21 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-17 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-30 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),OrderEventQA2-0 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-26 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-5 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-38 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-1 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-34 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-16 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-45 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-12 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-41 -> 
(Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-24 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-20 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-49 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-0 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-29 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-25 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-8 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-37 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-4 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-33 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-15 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-48 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-11 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-44 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-23 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-19 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-32 -> 
(Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-28 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-7 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-40 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-3 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-36 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-47 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-14 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-43 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-10 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-22 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-18 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-31 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-27 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-39 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-6 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-35 -> (Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1),__consumer_offsets-2 -> 
(Leader:0,ISR:0,LeaderRecoveryState:RECOVERED,LeaderEpoch:0,ZkVersion:0,ControllerEpoch:1) (kafka.controller.KafkaController) +[2023-11-03 14:29:33,699] INFO [ControllerEventThread controllerId=0] Shutting down (kafka.controller.ControllerEventManager$ControllerEventThread) +[2023-11-03 14:29:33,699] INFO [ControllerEventThread controllerId=0] Stopped (kafka.controller.ControllerEventManager$ControllerEventThread) +[2023-11-03 14:29:33,699] INFO [ControllerEventThread controllerId=0] Shutdown completed (kafka.controller.ControllerEventManager$ControllerEventThread) +[2023-11-03 14:29:33,699] DEBUG [Controller id=0] Resigning (kafka.controller.KafkaController) +[2023-11-03 14:29:33,699] DEBUG [Controller id=0] Unregister BrokerModifications handler for Set(0) (kafka.controller.KafkaController) +[2023-11-03 14:29:33,700] INFO [PartitionStateMachine controllerId=0] Stopped partition state machine (kafka.controller.ZkPartitionStateMachine) +[2023-11-03 14:29:33,700] INFO [ReplicaStateMachine controllerId=0] Stopped replica state machine (kafka.controller.ZkReplicaStateMachine) +[2023-11-03 14:29:33,701] INFO [RequestSendThread controllerId=0] Shutting down (kafka.controller.RequestSendThread) +[2023-11-03 14:29:33,701] INFO [RequestSendThread controllerId=0] Stopped (kafka.controller.RequestSendThread) +[2023-11-03 14:29:33,701] INFO [RequestSendThread controllerId=0] Shutdown completed (kafka.controller.RequestSendThread) +[2023-11-03 14:29:33,701] INFO [Controller id=0] Resigned (kafka.controller.KafkaController) diff --git a/logs/kafka-authorizer.log b/logs/kafka-authorizer.log new file mode 100644 index 0000000..e69de29 diff --git a/logs/kafka-request.log b/logs/kafka-request.log new file mode 100644 index 0000000..e69de29 diff --git a/logs/kafkaServer-gc.log b/logs/kafkaServer-gc.log new file mode 100644 index 0000000..a0d0a36 --- /dev/null +++ b/logs/kafkaServer-gc.log @@ -0,0 +1,140 @@ +[2023-11-03T15:24:53.114-0400][gc] Using G1 
+[2023-11-03T15:24:53.140-0400][gc,init] Version: 17.0.6+10 (release) +[2023-11-03T15:24:53.140-0400][gc,init] CPUs: 12 total, 12 available +[2023-11-03T15:24:53.140-0400][gc,init] Memory: 63941M +[2023-11-03T15:24:53.140-0400][gc,init] Large Page Support: Disabled +[2023-11-03T15:24:53.140-0400][gc,init] NUMA Support: Disabled +[2023-11-03T15:24:53.140-0400][gc,init] Compressed Oops: Enabled (32-bit) +[2023-11-03T15:24:53.140-0400][gc,init] Heap Region Size: 1M +[2023-11-03T15:24:53.140-0400][gc,init] Heap Min Capacity: 1G +[2023-11-03T15:24:53.140-0400][gc,init] Heap Initial Capacity: 1G +[2023-11-03T15:24:53.140-0400][gc,init] Heap Max Capacity: 1G +[2023-11-03T15:24:53.140-0400][gc,init] Pre-touch: Disabled +[2023-11-03T15:24:53.140-0400][gc,init] Parallel Workers: 10 +[2023-11-03T15:24:53.140-0400][gc,init] Concurrent Workers: 3 +[2023-11-03T15:24:53.140-0400][gc,init] Concurrent Refinement Workers: 10 +[2023-11-03T15:24:53.140-0400][gc,init] Periodic GC: Disabled +[2023-11-03T15:24:53.141-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0. 
+[2023-11-03T15:24:53.141-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824 +[2023-11-03T15:24:53.141-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000 +[2023-11-03T15:24:53.830-0400][gc,start ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) +[2023-11-03T15:24:53.831-0400][gc,task ] GC(0) Using 10 workers of 10 for evacuation +[2023-11-03T15:24:53.838-0400][gc,phases ] GC(0) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T15:24:53.838-0400][gc,phases ] GC(0) Merge Heap Roots: 0.0ms +[2023-11-03T15:24:53.838-0400][gc,phases ] GC(0) Evacuate Collection Set: 6.0ms +[2023-11-03T15:24:53.838-0400][gc,phases ] GC(0) Post Evacuate Collection Set: 0.6ms +[2023-11-03T15:24:53.838-0400][gc,phases ] GC(0) Other: 1.0ms +[2023-11-03T15:24:53.838-0400][gc,heap ] GC(0) Eden regions: 51->0(44) +[2023-11-03T15:24:53.838-0400][gc,heap ] GC(0) Survivor regions: 0->7(7) +[2023-11-03T15:24:53.838-0400][gc,heap ] GC(0) Old regions: 0->1 +[2023-11-03T15:24:53.838-0400][gc,heap ] GC(0) Archive regions: 2->2 +[2023-11-03T15:24:53.838-0400][gc,heap ] GC(0) Humongous regions: 0->0 +[2023-11-03T15:24:53.838-0400][gc,metaspace] GC(0) Metaspace: 18467K(18688K)->18467K(18688K) NonClass: 16291K(16384K)->16291K(16384K) Class: 2175K(2304K)->2175K(2304K) +[2023-11-03T15:24:53.838-0400][gc ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 51M->8M(1024M) 7.835ms +[2023-11-03T15:24:53.838-0400][gc,cpu ] GC(0) User=0.05s Sys=0.00s Real=0.01s +[2023-11-03T15:24:53.965-0400][gc,start ] GC(1) Pause Young (Concurrent Start) (Metadata GC Threshold) +[2023-11-03T15:24:53.965-0400][gc,task ] GC(1) Using 10 workers of 10 for evacuation +[2023-11-03T15:24:53.972-0400][gc,phases ] GC(1) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T15:24:53.972-0400][gc,phases ] GC(1) Merge Heap Roots: 0.0ms +[2023-11-03T15:24:53.972-0400][gc,phases ] GC(1) Evacuate Collection Set: 6.2ms 
+[2023-11-03T15:24:53.972-0400][gc,phases ] GC(1) Post Evacuate Collection Set: 0.5ms +[2023-11-03T15:24:53.972-0400][gc,phases ] GC(1) Other: 0.1ms +[2023-11-03T15:24:53.972-0400][gc,heap ] GC(1) Eden regions: 11->0(49) +[2023-11-03T15:24:53.972-0400][gc,heap ] GC(1) Survivor regions: 7->2(7) +[2023-11-03T15:24:53.972-0400][gc,heap ] GC(1) Old regions: 1->8 +[2023-11-03T15:24:53.972-0400][gc,heap ] GC(1) Archive regions: 2->2 +[2023-11-03T15:24:53.972-0400][gc,heap ] GC(1) Humongous regions: 0->0 +[2023-11-03T15:24:53.972-0400][gc,metaspace] GC(1) Metaspace: 21300K(21504K)->21300K(21504K) NonClass: 18723K(18816K)->18723K(18816K) Class: 2576K(2688K)->2576K(2688K) +[2023-11-03T15:24:53.972-0400][gc ] GC(1) Pause Young (Concurrent Start) (Metadata GC Threshold) 19M->9M(1024M) 7.071ms +[2023-11-03T15:24:53.972-0400][gc,cpu ] GC(1) User=0.07s Sys=0.00s Real=0.01s +[2023-11-03T15:24:53.972-0400][gc ] GC(2) Concurrent Mark Cycle +[2023-11-03T15:24:53.972-0400][gc,marking ] GC(2) Concurrent Clear Claimed Marks +[2023-11-03T15:24:53.972-0400][gc,marking ] GC(2) Concurrent Clear Claimed Marks 0.013ms +[2023-11-03T15:24:53.972-0400][gc,marking ] GC(2) Concurrent Scan Root Regions +[2023-11-03T15:24:53.974-0400][gc,marking ] GC(2) Concurrent Scan Root Regions 1.582ms +[2023-11-03T15:24:53.974-0400][gc,marking ] GC(2) Concurrent Mark +[2023-11-03T15:24:53.974-0400][gc,marking ] GC(2) Concurrent Mark From Roots +[2023-11-03T15:24:53.974-0400][gc,task ] GC(2) Using 3 workers of 3 for marking +[2023-11-03T15:24:53.975-0400][gc,marking ] GC(2) Concurrent Mark From Roots 1.142ms +[2023-11-03T15:24:53.975-0400][gc,marking ] GC(2) Concurrent Preclean +[2023-11-03T15:24:53.975-0400][gc,marking ] GC(2) Concurrent Preclean 0.054ms +[2023-11-03T15:24:53.975-0400][gc,start ] GC(2) Pause Remark +[2023-11-03T15:24:53.976-0400][gc ] GC(2) Pause Remark 10M->10M(1024M) 0.848ms +[2023-11-03T15:24:53.976-0400][gc,cpu ] GC(2) User=0.00s Sys=0.00s Real=0.00s 
+[2023-11-03T15:24:53.976-0400][gc,marking ] GC(2) Concurrent Mark 2.135ms +[2023-11-03T15:24:53.976-0400][gc,marking ] GC(2) Concurrent Rebuild Remembered Sets +[2023-11-03T15:24:53.977-0400][gc,marking ] GC(2) Concurrent Rebuild Remembered Sets 1.291ms +[2023-11-03T15:24:53.977-0400][gc,start ] GC(2) Pause Cleanup +[2023-11-03T15:24:53.978-0400][gc ] GC(2) Pause Cleanup 10M->10M(1024M) 0.069ms +[2023-11-03T15:24:53.978-0400][gc,cpu ] GC(2) User=0.00s Sys=0.00s Real=0.00s +[2023-11-03T15:24:53.978-0400][gc,marking ] GC(2) Concurrent Cleanup for Next Mark +[2023-11-03T15:24:53.979-0400][gc,marking ] GC(2) Concurrent Cleanup for Next Mark 1.384ms +[2023-11-03T15:24:53.979-0400][gc ] GC(2) Concurrent Mark Cycle 6.692ms +[2023-11-03T15:24:54.408-0400][gc,start ] GC(3) Pause Young (Normal) (G1 Evacuation Pause) +[2023-11-03T15:24:54.408-0400][gc,task ] GC(3) Using 10 workers of 10 for evacuation +[2023-11-03T15:24:54.411-0400][gc,phases ] GC(3) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T15:24:54.411-0400][gc,phases ] GC(3) Merge Heap Roots: 0.2ms +[2023-11-03T15:24:54.411-0400][gc,phases ] GC(3) Evacuate Collection Set: 1.8ms +[2023-11-03T15:24:54.411-0400][gc,phases ] GC(3) Post Evacuate Collection Set: 0.3ms +[2023-11-03T15:24:54.411-0400][gc,phases ] GC(3) Other: 0.1ms +[2023-11-03T15:24:54.411-0400][gc,heap ] GC(3) Eden regions: 49->0(45) +[2023-11-03T15:24:54.411-0400][gc,heap ] GC(3) Survivor regions: 2->6(7) +[2023-11-03T15:24:54.411-0400][gc,heap ] GC(3) Old regions: 8->8 +[2023-11-03T15:24:54.411-0400][gc,heap ] GC(3) Archive regions: 2->2 +[2023-11-03T15:24:54.411-0400][gc,heap ] GC(3) Humongous regions: 129->129 +[2023-11-03T15:24:54.411-0400][gc,metaspace] GC(3) Metaspace: 30891K(31232K)->30891K(31232K) NonClass: 27404K(27584K)->27404K(27584K) Class: 3486K(3648K)->3486K(3648K) +[2023-11-03T15:24:54.411-0400][gc ] GC(3) Pause Young (Normal) (G1 Evacuation Pause) 187M->143M(1024M) 2.542ms +[2023-11-03T15:24:54.411-0400][gc,cpu ] GC(3) User=0.02s 
Sys=0.00s Real=0.00s +[2023-11-03T15:24:54.625-0400][gc,start ] GC(4) Pause Young (Concurrent Start) (Metadata GC Threshold) +[2023-11-03T15:24:54.625-0400][gc,task ] GC(4) Using 10 workers of 10 for evacuation +[2023-11-03T15:24:54.631-0400][gc,phases ] GC(4) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T15:24:54.631-0400][gc,phases ] GC(4) Merge Heap Roots: 0.0ms +[2023-11-03T15:24:54.631-0400][gc,phases ] GC(4) Evacuate Collection Set: 5.0ms +[2023-11-03T15:24:54.631-0400][gc,phases ] GC(4) Post Evacuate Collection Set: 0.4ms +[2023-11-03T15:24:54.631-0400][gc,phases ] GC(4) Other: 0.1ms +[2023-11-03T15:24:54.631-0400][gc,heap ] GC(4) Eden regions: 21->0(49) +[2023-11-03T15:24:54.631-0400][gc,heap ] GC(4) Survivor regions: 6->2(7) +[2023-11-03T15:24:54.631-0400][gc,heap ] GC(4) Old regions: 8->14 +[2023-11-03T15:24:54.631-0400][gc,heap ] GC(4) Archive regions: 2->2 +[2023-11-03T15:24:54.631-0400][gc,heap ] GC(4) Humongous regions: 129->129 +[2023-11-03T15:24:54.631-0400][gc,metaspace] GC(4) Metaspace: 35693K(35968K)->35693K(35968K) NonClass: 31640K(31808K)->31640K(31808K) Class: 4052K(4160K)->4052K(4160K) +[2023-11-03T15:24:54.631-0400][gc ] GC(4) Pause Young (Concurrent Start) (Metadata GC Threshold) 163M->144M(1024M) 5.708ms +[2023-11-03T15:24:54.631-0400][gc,cpu ] GC(4) User=0.04s Sys=0.00s Real=0.00s +[2023-11-03T15:24:54.631-0400][gc ] GC(5) Concurrent Mark Cycle +[2023-11-03T15:24:54.631-0400][gc,marking ] GC(5) Concurrent Clear Claimed Marks +[2023-11-03T15:24:54.631-0400][gc,marking ] GC(5) Concurrent Clear Claimed Marks 0.021ms +[2023-11-03T15:24:54.631-0400][gc,marking ] GC(5) Concurrent Scan Root Regions +[2023-11-03T15:24:54.632-0400][gc,marking ] GC(5) Concurrent Scan Root Regions 1.590ms +[2023-11-03T15:24:54.632-0400][gc,marking ] GC(5) Concurrent Mark +[2023-11-03T15:24:54.632-0400][gc,marking ] GC(5) Concurrent Mark From Roots +[2023-11-03T15:24:54.632-0400][gc,task ] GC(5) Using 3 workers of 3 for marking 
+[2023-11-03T15:24:54.636-0400][gc,marking ] GC(5) Concurrent Mark From Roots 3.886ms +[2023-11-03T15:24:54.636-0400][gc,marking ] GC(5) Concurrent Preclean +[2023-11-03T15:24:54.637-0400][gc,marking ] GC(5) Concurrent Preclean 0.113ms +[2023-11-03T15:24:54.637-0400][gc,start ] GC(5) Pause Remark +[2023-11-03T15:24:54.638-0400][gc ] GC(5) Pause Remark 145M->145M(1024M) 0.913ms +[2023-11-03T15:24:54.638-0400][gc,cpu ] GC(5) User=0.00s Sys=0.00s Real=0.00s +[2023-11-03T15:24:54.638-0400][gc,marking ] GC(5) Concurrent Mark 5.054ms +[2023-11-03T15:24:54.638-0400][gc,marking ] GC(5) Concurrent Rebuild Remembered Sets +[2023-11-03T15:24:54.641-0400][gc,marking ] GC(5) Concurrent Rebuild Remembered Sets 3.181ms +[2023-11-03T15:24:54.641-0400][gc,start ] GC(5) Pause Cleanup +[2023-11-03T15:24:54.641-0400][gc ] GC(5) Pause Cleanup 145M->145M(1024M) 0.110ms +[2023-11-03T15:24:54.641-0400][gc,cpu ] GC(5) User=0.00s Sys=0.00s Real=0.00s +[2023-11-03T15:24:54.641-0400][gc,marking ] GC(5) Concurrent Cleanup for Next Mark +[2023-11-03T15:24:54.642-0400][gc,marking ] GC(5) Concurrent Cleanup for Next Mark 1.322ms +[2023-11-03T15:24:54.642-0400][gc ] GC(5) Concurrent Mark Cycle 11.546ms +[2023-11-03T15:41:52.455-0400][gc,start ] GC(6) Pause Young (Normal) (G1 Evacuation Pause) +[2023-11-03T15:41:52.455-0400][gc,task ] GC(6) Using 10 workers of 10 for evacuation +[2023-11-03T15:41:52.457-0400][gc,phases ] GC(6) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T15:41:52.457-0400][gc,phases ] GC(6) Merge Heap Roots: 0.1ms +[2023-11-03T15:41:52.457-0400][gc,phases ] GC(6) Evacuate Collection Set: 1.6ms +[2023-11-03T15:41:52.457-0400][gc,phases ] GC(6) Post Evacuate Collection Set: 0.5ms +[2023-11-03T15:41:52.457-0400][gc,phases ] GC(6) Other: 0.1ms +[2023-11-03T15:41:52.457-0400][gc,heap ] GC(6) Eden regions: 49->0(47) +[2023-11-03T15:41:52.457-0400][gc,heap ] GC(6) Survivor regions: 2->4(7) +[2023-11-03T15:41:52.457-0400][gc,heap ] GC(6) Old regions: 14->14 
+[2023-11-03T15:41:52.457-0400][gc,heap ] GC(6) Archive regions: 2->2 +[2023-11-03T15:41:52.457-0400][gc,heap ] GC(6) Humongous regions: 135->129 +[2023-11-03T15:41:52.457-0400][gc,metaspace] GC(6) Metaspace: 39954K(40448K)->39954K(40448K) NonClass: 35158K(35456K)->35158K(35456K) Class: 4796K(4992K)->4796K(4992K) +[2023-11-03T15:41:52.457-0400][gc ] GC(6) Pause Young (Normal) (G1 Evacuation Pause) 199M->146M(1024M) 2.349ms +[2023-11-03T15:41:52.457-0400][gc,cpu ] GC(6) User=0.00s Sys=0.02s Real=0.00s diff --git a/logs/kafkaServer-gc.log.0 b/logs/kafkaServer-gc.log.0 new file mode 100644 index 0000000..33829c5 --- /dev/null +++ b/logs/kafkaServer-gc.log.0 @@ -0,0 +1,145 @@ +[2023-11-03T14:01:22.212-0400][gc] Using G1 +[2023-11-03T14:01:22.218-0400][gc,init] Version: 17.0.6+10 (release) +[2023-11-03T14:01:22.218-0400][gc,init] CPUs: 12 total, 12 available +[2023-11-03T14:01:22.218-0400][gc,init] Memory: 63941M +[2023-11-03T14:01:22.218-0400][gc,init] Large Page Support: Disabled +[2023-11-03T14:01:22.218-0400][gc,init] NUMA Support: Disabled +[2023-11-03T14:01:22.218-0400][gc,init] Compressed Oops: Enabled (32-bit) +[2023-11-03T14:01:22.218-0400][gc,init] Heap Region Size: 1M +[2023-11-03T14:01:22.218-0400][gc,init] Heap Min Capacity: 1G +[2023-11-03T14:01:22.218-0400][gc,init] Heap Initial Capacity: 1G +[2023-11-03T14:01:22.218-0400][gc,init] Heap Max Capacity: 1G +[2023-11-03T14:01:22.218-0400][gc,init] Pre-touch: Disabled +[2023-11-03T14:01:22.218-0400][gc,init] Parallel Workers: 10 +[2023-11-03T14:01:22.218-0400][gc,init] Concurrent Workers: 3 +[2023-11-03T14:01:22.218-0400][gc,init] Concurrent Refinement Workers: 10 +[2023-11-03T14:01:22.218-0400][gc,init] Periodic GC: Disabled +[2023-11-03T14:01:22.218-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0. 
+[2023-11-03T14:01:22.218-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824 +[2023-11-03T14:01:22.218-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000 +[2023-11-03T14:01:22.915-0400][gc,start ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) +[2023-11-03T14:01:22.916-0400][gc,task ] GC(0) Using 10 workers of 10 for evacuation +[2023-11-03T14:01:22.922-0400][gc,phases ] GC(0) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T14:01:22.922-0400][gc,phases ] GC(0) Merge Heap Roots: 0.1ms +[2023-11-03T14:01:22.922-0400][gc,phases ] GC(0) Evacuate Collection Set: 6.0ms +[2023-11-03T14:01:22.922-0400][gc,phases ] GC(0) Post Evacuate Collection Set: 0.6ms +[2023-11-03T14:01:22.922-0400][gc,phases ] GC(0) Other: 0.8ms +[2023-11-03T14:01:22.922-0400][gc,heap ] GC(0) Eden regions: 51->0(44) +[2023-11-03T14:01:22.922-0400][gc,heap ] GC(0) Survivor regions: 0->7(7) +[2023-11-03T14:01:22.922-0400][gc,heap ] GC(0) Old regions: 0->1 +[2023-11-03T14:01:22.922-0400][gc,heap ] GC(0) Archive regions: 2->2 +[2023-11-03T14:01:22.922-0400][gc,heap ] GC(0) Humongous regions: 0->0 +[2023-11-03T14:01:22.922-0400][gc,metaspace] GC(0) Metaspace: 18650K(18880K)->18650K(18880K) NonClass: 16438K(16576K)->16438K(16576K) Class: 2211K(2304K)->2211K(2304K) +[2023-11-03T14:01:22.922-0400][gc ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 51M->8M(1024M) 7.631ms +[2023-11-03T14:01:22.922-0400][gc,cpu ] GC(0) User=0.05s Sys=0.01s Real=0.01s +[2023-11-03T14:01:23.047-0400][gc,start ] GC(1) Pause Young (Concurrent Start) (Metadata GC Threshold) +[2023-11-03T14:01:23.047-0400][gc,task ] GC(1) Using 10 workers of 10 for evacuation +[2023-11-03T14:01:23.054-0400][gc,phases ] GC(1) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T14:01:23.054-0400][gc,phases ] GC(1) Merge Heap Roots: 0.1ms +[2023-11-03T14:01:23.054-0400][gc,phases ] GC(1) Evacuate Collection Set: 6.1ms 
+[2023-11-03T14:01:23.054-0400][gc,phases ] GC(1) Post Evacuate Collection Set: 0.4ms +[2023-11-03T14:01:23.054-0400][gc,phases ] GC(1) Other: 0.2ms +[2023-11-03T14:01:23.054-0400][gc,heap ] GC(1) Eden regions: 12->0(49) +[2023-11-03T14:01:23.054-0400][gc,heap ] GC(1) Survivor regions: 7->2(7) +[2023-11-03T14:01:23.054-0400][gc,heap ] GC(1) Old regions: 1->8 +[2023-11-03T14:01:23.054-0400][gc,heap ] GC(1) Archive regions: 2->2 +[2023-11-03T14:01:23.054-0400][gc,heap ] GC(1) Humongous regions: 129->129 +[2023-11-03T14:01:23.054-0400][gc,metaspace] GC(1) Metaspace: 21248K(21504K)->21248K(21504K) NonClass: 18697K(18816K)->18697K(18816K) Class: 2551K(2688K)->2551K(2688K) +[2023-11-03T14:01:23.054-0400][gc ] GC(1) Pause Young (Concurrent Start) (Metadata GC Threshold) 148M->139M(1024M) 7.020ms +[2023-11-03T14:01:23.054-0400][gc,cpu ] GC(1) User=0.02s Sys=0.05s Real=0.01s +[2023-11-03T14:01:23.054-0400][gc ] GC(2) Concurrent Mark Cycle +[2023-11-03T14:01:23.054-0400][gc,marking ] GC(2) Concurrent Clear Claimed Marks +[2023-11-03T14:01:23.054-0400][gc,marking ] GC(2) Concurrent Clear Claimed Marks 0.011ms +[2023-11-03T14:01:23.054-0400][gc,marking ] GC(2) Concurrent Scan Root Regions +[2023-11-03T14:01:23.055-0400][gc,marking ] GC(2) Concurrent Scan Root Regions 1.129ms +[2023-11-03T14:01:23.055-0400][gc,marking ] GC(2) Concurrent Mark +[2023-11-03T14:01:23.055-0400][gc,marking ] GC(2) Concurrent Mark From Roots +[2023-11-03T14:01:23.055-0400][gc,task ] GC(2) Using 3 workers of 3 for marking +[2023-11-03T14:01:23.056-0400][gc,marking ] GC(2) Concurrent Mark From Roots 0.992ms +[2023-11-03T14:01:23.056-0400][gc,marking ] GC(2) Concurrent Preclean +[2023-11-03T14:01:23.056-0400][gc,marking ] GC(2) Concurrent Preclean 0.104ms +[2023-11-03T14:01:23.057-0400][gc,start ] GC(2) Pause Remark +[2023-11-03T14:01:23.057-0400][gc ] GC(2) Pause Remark 139M->139M(1024M) 0.987ms +[2023-11-03T14:01:23.058-0400][gc,cpu ] GC(2) User=0.01s Sys=0.00s Real=0.00s 
+[2023-11-03T14:01:23.058-0400][gc,marking ] GC(2) Concurrent Mark 2.226ms +[2023-11-03T14:01:23.058-0400][gc,marking ] GC(2) Concurrent Rebuild Remembered Sets +[2023-11-03T14:01:23.059-0400][gc,marking ] GC(2) Concurrent Rebuild Remembered Sets 1.124ms +[2023-11-03T14:01:23.059-0400][gc,start ] GC(2) Pause Cleanup +[2023-11-03T14:01:23.059-0400][gc ] GC(2) Pause Cleanup 139M->139M(1024M) 0.156ms +[2023-11-03T14:01:23.059-0400][gc,cpu ] GC(2) User=0.00s Sys=0.00s Real=0.00s +[2023-11-03T14:01:23.059-0400][gc,marking ] GC(2) Concurrent Cleanup for Next Mark +[2023-11-03T14:01:23.061-0400][gc,marking ] GC(2) Concurrent Cleanup for Next Mark 1.751ms +[2023-11-03T14:01:23.061-0400][gc ] GC(2) Concurrent Mark Cycle 6.673ms +[2023-11-03T14:01:23.431-0400][gc,start ] GC(3) Pause Young (Normal) (G1 Evacuation Pause) +[2023-11-03T14:01:23.431-0400][gc,task ] GC(3) Using 10 workers of 10 for evacuation +[2023-11-03T14:01:23.434-0400][gc,phases ] GC(3) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T14:01:23.434-0400][gc,phases ] GC(3) Merge Heap Roots: 0.1ms +[2023-11-03T14:01:23.434-0400][gc,phases ] GC(3) Evacuate Collection Set: 2.1ms +[2023-11-03T14:01:23.434-0400][gc,phases ] GC(3) Post Evacuate Collection Set: 0.3ms +[2023-11-03T14:01:23.434-0400][gc,phases ] GC(3) Other: 0.1ms +[2023-11-03T14:01:23.434-0400][gc,heap ] GC(3) Eden regions: 49->0(45) +[2023-11-03T14:01:23.434-0400][gc,heap ] GC(3) Survivor regions: 2->6(7) +[2023-11-03T14:01:23.434-0400][gc,heap ] GC(3) Old regions: 8->8 +[2023-11-03T14:01:23.434-0400][gc,heap ] GC(3) Archive regions: 2->2 +[2023-11-03T14:01:23.434-0400][gc,heap ] GC(3) Humongous regions: 129->129 +[2023-11-03T14:01:23.434-0400][gc,metaspace] GC(3) Metaspace: 32525K(32768K)->32525K(32768K) NonClass: 28907K(29056K)->28907K(29056K) Class: 3617K(3712K)->3617K(3712K) +[2023-11-03T14:01:23.434-0400][gc ] GC(3) Pause Young (Normal) (G1 Evacuation Pause) 188M->143M(1024M) 2.766ms +[2023-11-03T14:01:23.434-0400][gc,cpu ] GC(3) User=0.02s 
Sys=0.00s Real=0.00s +[2023-11-03T14:23:03.115-0400][gc,start ] GC(4) Pause Young (Concurrent Start) (Metadata GC Threshold) +[2023-11-03T14:23:03.115-0400][gc,task ] GC(4) Using 10 workers of 10 for evacuation +[2023-11-03T14:23:03.121-0400][gc,phases ] GC(4) Pre Evacuate Collection Set: 0.2ms +[2023-11-03T14:23:03.121-0400][gc,phases ] GC(4) Merge Heap Roots: 0.1ms +[2023-11-03T14:23:03.121-0400][gc,phases ] GC(4) Evacuate Collection Set: 5.7ms +[2023-11-03T14:23:03.121-0400][gc,phases ] GC(4) Post Evacuate Collection Set: 0.4ms +[2023-11-03T14:23:03.121-0400][gc,phases ] GC(4) Other: 0.2ms +[2023-11-03T14:23:03.121-0400][gc,heap ] GC(4) Eden regions: 23->0(49) +[2023-11-03T14:23:03.121-0400][gc,heap ] GC(4) Survivor regions: 6->2(7) +[2023-11-03T14:23:03.121-0400][gc,heap ] GC(4) Old regions: 8->14 +[2023-11-03T14:23:03.121-0400][gc,heap ] GC(4) Archive regions: 2->2 +[2023-11-03T14:23:03.121-0400][gc,heap ] GC(4) Humongous regions: 129->129 +[2023-11-03T14:23:03.121-0400][gc,metaspace] GC(4) Metaspace: 35665K(35968K)->35665K(35968K) NonClass: 31538K(31680K)->31538K(31680K) Class: 4127K(4288K)->4127K(4288K) +[2023-11-03T14:23:03.121-0400][gc ] GC(4) Pause Young (Concurrent Start) (Metadata GC Threshold) 166M->144M(1024M) 6.584ms +[2023-11-03T14:23:03.121-0400][gc,cpu ] GC(4) User=0.06s Sys=0.02s Real=0.01s +[2023-11-03T14:23:03.121-0400][gc ] GC(5) Concurrent Mark Cycle +[2023-11-03T14:23:03.121-0400][gc,marking ] GC(5) Concurrent Clear Claimed Marks +[2023-11-03T14:23:03.121-0400][gc,marking ] GC(5) Concurrent Clear Claimed Marks 0.041ms +[2023-11-03T14:23:03.121-0400][gc,marking ] GC(5) Concurrent Scan Root Regions +[2023-11-03T14:23:03.123-0400][gc,marking ] GC(5) Concurrent Scan Root Regions 1.508ms +[2023-11-03T14:23:03.123-0400][gc,marking ] GC(5) Concurrent Mark +[2023-11-03T14:23:03.123-0400][gc,marking ] GC(5) Concurrent Mark From Roots +[2023-11-03T14:23:03.123-0400][gc,task ] GC(5) Using 3 workers of 3 for marking 
+[2023-11-03T14:23:03.126-0400][gc,marking ] GC(5) Concurrent Mark From Roots 3.390ms +[2023-11-03T14:23:03.126-0400][gc,marking ] GC(5) Concurrent Preclean +[2023-11-03T14:23:03.127-0400][gc,marking ] GC(5) Concurrent Preclean 0.282ms +[2023-11-03T14:23:03.127-0400][gc,start ] GC(5) Pause Remark +[2023-11-03T14:23:03.129-0400][gc ] GC(5) Pause Remark 144M->144M(1024M) 2.056ms +[2023-11-03T14:23:03.129-0400][gc,cpu ] GC(5) User=0.01s Sys=0.01s Real=0.00s +[2023-11-03T14:23:03.129-0400][gc,marking ] GC(5) Concurrent Mark 5.975ms +[2023-11-03T14:23:03.129-0400][gc,marking ] GC(5) Concurrent Rebuild Remembered Sets +[2023-11-03T14:23:03.132-0400][gc,marking ] GC(5) Concurrent Rebuild Remembered Sets 3.414ms +[2023-11-03T14:23:03.132-0400][gc,start ] GC(5) Pause Cleanup +[2023-11-03T14:23:03.133-0400][gc ] GC(5) Pause Cleanup 145M->145M(1024M) 0.205ms +[2023-11-03T14:23:03.133-0400][gc,cpu ] GC(5) User=0.00s Sys=0.00s Real=0.00s +[2023-11-03T14:23:03.133-0400][gc,marking ] GC(5) Concurrent Cleanup for Next Mark +[2023-11-03T14:23:03.135-0400][gc,marking ] GC(5) Concurrent Cleanup for Next Mark 2.750ms +[2023-11-03T14:23:03.135-0400][gc ] GC(5) Concurrent Mark Cycle 14.140ms +[2023-11-03T14:28:33.190-0400][gc,start ] GC(6) Pause Young (Normal) (G1 Evacuation Pause) +[2023-11-03T14:28:33.190-0400][gc,task ] GC(6) Using 10 workers of 10 for evacuation +[2023-11-03T14:28:33.193-0400][gc,phases ] GC(6) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T14:28:33.193-0400][gc,phases ] GC(6) Merge Heap Roots: 0.1ms +[2023-11-03T14:28:33.193-0400][gc,phases ] GC(6) Evacuate Collection Set: 2.0ms +[2023-11-03T14:28:33.193-0400][gc,phases ] GC(6) Post Evacuate Collection Set: 0.7ms +[2023-11-03T14:28:33.193-0400][gc,phases ] GC(6) Other: 0.1ms +[2023-11-03T14:28:33.193-0400][gc,heap ] GC(6) Eden regions: 49->0(46) +[2023-11-03T14:28:33.193-0400][gc,heap ] GC(6) Survivor regions: 2->5(7) +[2023-11-03T14:28:33.193-0400][gc,heap ] GC(6) Old regions: 14->14 
+[2023-11-03T14:28:33.193-0400][gc,heap ] GC(6) Archive regions: 2->2 +[2023-11-03T14:28:33.193-0400][gc,heap ] GC(6) Humongous regions: 129->129 +[2023-11-03T14:28:33.193-0400][gc,metaspace] GC(6) Metaspace: 40756K(41344K)->40756K(41344K) NonClass: 35922K(36288K)->35922K(36288K) Class: 4834K(5056K)->4834K(5056K) +[2023-11-03T14:28:33.193-0400][gc ] GC(6) Pause Young (Normal) (G1 Evacuation Pause) 193M->147M(1024M) 3.085ms +[2023-11-03T14:28:33.193-0400][gc,cpu ] GC(6) User=0.02s Sys=0.00s Real=0.01s +[2023-11-03T14:29:33.966-0400][gc,heap,exit] Heap +[2023-11-03T14:29:33.966-0400][gc,heap,exit] garbage-first heap total 1048576K, used 165175K [0x00000000c0000000, 0x0000000100000000) +[2023-11-03T14:29:33.966-0400][gc,heap,exit] region size 1024K, 19 young (19456K), 5 survivors (5120K) +[2023-11-03T14:29:33.966-0400][gc,heap,exit] Metaspace used 41526K, committed 42048K, reserved 1089536K +[2023-11-03T14:29:33.966-0400][gc,heap,exit] class space used 4992K, committed 5184K, reserved 1048576K diff --git a/logs/kafkaServer-gc.log.1 b/logs/kafkaServer-gc.log.1 new file mode 100644 index 0000000..1e2aeb1 --- /dev/null +++ b/logs/kafkaServer-gc.log.1 @@ -0,0 +1,24 @@ +[2023-11-03T15:23:44.235-0400][gc] Using G1 +[2023-11-03T15:23:44.262-0400][gc,init] Version: 17.0.6+10 (release) +[2023-11-03T15:23:44.262-0400][gc,init] CPUs: 12 total, 12 available +[2023-11-03T15:23:44.263-0400][gc,init] Memory: 63941M +[2023-11-03T15:23:44.263-0400][gc,init] Large Page Support: Disabled +[2023-11-03T15:23:44.263-0400][gc,init] NUMA Support: Disabled +[2023-11-03T15:23:44.263-0400][gc,init] Compressed Oops: Enabled (32-bit) +[2023-11-03T15:23:44.263-0400][gc,init] Heap Region Size: 1M +[2023-11-03T15:23:44.263-0400][gc,init] Heap Min Capacity: 1G +[2023-11-03T15:23:44.263-0400][gc,init] Heap Initial Capacity: 1G +[2023-11-03T15:23:44.263-0400][gc,init] Heap Max Capacity: 1G +[2023-11-03T15:23:44.263-0400][gc,init] Pre-touch: Disabled +[2023-11-03T15:23:44.263-0400][gc,init] Parallel 
Workers: 10 +[2023-11-03T15:23:44.263-0400][gc,init] Concurrent Workers: 3 +[2023-11-03T15:23:44.263-0400][gc,init] Concurrent Refinement Workers: 10 +[2023-11-03T15:23:44.263-0400][gc,init] Periodic GC: Disabled +[2023-11-03T15:23:44.263-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0. +[2023-11-03T15:23:44.263-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824 +[2023-11-03T15:23:44.263-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000 +[2023-11-03T15:24:02.204-0400][gc,heap,exit] Heap +[2023-11-03T15:24:02.204-0400][gc,heap,exit] garbage-first heap total 1048576K, used 41442K [0x00000000c0000000, 0x0000000100000000) +[2023-11-03T15:24:02.204-0400][gc,heap,exit] region size 1024K, 40 young (40960K), 0 survivors (0K) +[2023-11-03T15:24:02.204-0400][gc,heap,exit] Metaspace used 14486K, committed 14720K, reserved 1064960K +[2023-11-03T15:24:02.204-0400][gc,heap,exit] class space used 1743K, committed 1856K, reserved 1048576K diff --git a/logs/kafkaServer-gc.log.2 b/logs/kafkaServer-gc.log.2 new file mode 100644 index 0000000..ca5e164 --- /dev/null +++ b/logs/kafkaServer-gc.log.2 @@ -0,0 +1,92 @@ +[2023-11-03T15:24:34.512-0400][gc] Using G1 +[2023-11-03T15:24:34.539-0400][gc,init] Version: 17.0.6+10 (release) +[2023-11-03T15:24:34.539-0400][gc,init] CPUs: 12 total, 12 available +[2023-11-03T15:24:34.539-0400][gc,init] Memory: 63941M +[2023-11-03T15:24:34.539-0400][gc,init] Large Page Support: Disabled +[2023-11-03T15:24:34.539-0400][gc,init] NUMA Support: Disabled +[2023-11-03T15:24:34.539-0400][gc,init] Compressed Oops: Enabled (32-bit) +[2023-11-03T15:24:34.539-0400][gc,init] Heap Region Size: 1M +[2023-11-03T15:24:34.539-0400][gc,init] Heap Min Capacity: 1G +[2023-11-03T15:24:34.539-0400][gc,init] 
Heap Initial Capacity: 1G +[2023-11-03T15:24:34.540-0400][gc,init] Heap Max Capacity: 1G +[2023-11-03T15:24:34.540-0400][gc,init] Pre-touch: Disabled +[2023-11-03T15:24:34.540-0400][gc,init] Parallel Workers: 10 +[2023-11-03T15:24:34.540-0400][gc,init] Concurrent Workers: 3 +[2023-11-03T15:24:34.540-0400][gc,init] Concurrent Refinement Workers: 10 +[2023-11-03T15:24:34.540-0400][gc,init] Periodic GC: Disabled +[2023-11-03T15:24:34.540-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0. +[2023-11-03T15:24:34.540-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824 +[2023-11-03T15:24:34.540-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000 +[2023-11-03T15:24:35.240-0400][gc,start ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) +[2023-11-03T15:24:35.241-0400][gc,task ] GC(0) Using 10 workers of 10 for evacuation +[2023-11-03T15:24:35.247-0400][gc,phases ] GC(0) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T15:24:35.247-0400][gc,phases ] GC(0) Merge Heap Roots: 0.0ms +[2023-11-03T15:24:35.247-0400][gc,phases ] GC(0) Evacuate Collection Set: 5.4ms +[2023-11-03T15:24:35.247-0400][gc,phases ] GC(0) Post Evacuate Collection Set: 0.5ms +[2023-11-03T15:24:35.247-0400][gc,phases ] GC(0) Other: 0.9ms +[2023-11-03T15:24:35.247-0400][gc,heap ] GC(0) Eden regions: 51->0(44) +[2023-11-03T15:24:35.247-0400][gc,heap ] GC(0) Survivor regions: 0->7(7) +[2023-11-03T15:24:35.247-0400][gc,heap ] GC(0) Old regions: 0->1 +[2023-11-03T15:24:35.247-0400][gc,heap ] GC(0) Archive regions: 2->2 +[2023-11-03T15:24:35.247-0400][gc,heap ] GC(0) Humongous regions: 0->0 +[2023-11-03T15:24:35.247-0400][gc,metaspace] GC(0) Metaspace: 18383K(18624K)->18383K(18624K) NonClass: 16218K(16320K)->16218K(16320K) Class: 2165K(2304K)->2165K(2304K) 
+[2023-11-03T15:24:35.247-0400][gc ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 51M->8M(1024M) 6.993ms +[2023-11-03T15:24:35.247-0400][gc,cpu ] GC(0) User=0.05s Sys=0.00s Real=0.01s +[2023-11-03T15:24:35.387-0400][gc,start ] GC(1) Pause Young (Concurrent Start) (Metadata GC Threshold) +[2023-11-03T15:24:35.387-0400][gc,task ] GC(1) Using 10 workers of 10 for evacuation +[2023-11-03T15:24:35.392-0400][gc,phases ] GC(1) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T15:24:35.392-0400][gc,phases ] GC(1) Merge Heap Roots: 0.0ms +[2023-11-03T15:24:35.392-0400][gc,phases ] GC(1) Evacuate Collection Set: 5.2ms +[2023-11-03T15:24:35.392-0400][gc,phases ] GC(1) Post Evacuate Collection Set: 0.4ms +[2023-11-03T15:24:35.392-0400][gc,phases ] GC(1) Other: 0.1ms +[2023-11-03T15:24:35.392-0400][gc,heap ] GC(1) Eden regions: 11->0(49) +[2023-11-03T15:24:35.392-0400][gc,heap ] GC(1) Survivor regions: 7->2(7) +[2023-11-03T15:24:35.392-0400][gc,heap ] GC(1) Old regions: 1->8 +[2023-11-03T15:24:35.392-0400][gc,heap ] GC(1) Archive regions: 2->2 +[2023-11-03T15:24:35.392-0400][gc,heap ] GC(1) Humongous regions: 0->0 +[2023-11-03T15:24:35.392-0400][gc,metaspace] GC(1) Metaspace: 21331K(21504K)->21331K(21504K) NonClass: 18755K(18816K)->18755K(18816K) Class: 2576K(2688K)->2576K(2688K) +[2023-11-03T15:24:35.392-0400][gc ] GC(1) Pause Young (Concurrent Start) (Metadata GC Threshold) 19M->9M(1024M) 5.903ms +[2023-11-03T15:24:35.392-0400][gc,cpu ] GC(1) User=0.03s Sys=0.04s Real=0.01s +[2023-11-03T15:24:35.392-0400][gc ] GC(2) Concurrent Mark Cycle +[2023-11-03T15:24:35.393-0400][gc,marking ] GC(2) Concurrent Clear Claimed Marks +[2023-11-03T15:24:35.393-0400][gc,marking ] GC(2) Concurrent Clear Claimed Marks 0.014ms +[2023-11-03T15:24:35.393-0400][gc,marking ] GC(2) Concurrent Scan Root Regions +[2023-11-03T15:24:35.394-0400][gc,marking ] GC(2) Concurrent Scan Root Regions 1.340ms +[2023-11-03T15:24:35.394-0400][gc,marking ] GC(2) Concurrent Mark 
+[2023-11-03T15:24:35.394-0400][gc,marking ] GC(2) Concurrent Mark From Roots +[2023-11-03T15:24:35.394-0400][gc,task ] GC(2) Using 3 workers of 3 for marking +[2023-11-03T15:24:35.395-0400][gc,marking ] GC(2) Concurrent Mark From Roots 1.062ms +[2023-11-03T15:24:35.395-0400][gc,marking ] GC(2) Concurrent Preclean +[2023-11-03T15:24:35.395-0400][gc,marking ] GC(2) Concurrent Preclean 0.069ms +[2023-11-03T15:24:35.395-0400][gc,start ] GC(2) Pause Remark +[2023-11-03T15:24:35.396-0400][gc ] GC(2) Pause Remark 10M->10M(1024M) 0.939ms +[2023-11-03T15:24:35.396-0400][gc,cpu ] GC(2) User=0.00s Sys=0.00s Real=0.00s +[2023-11-03T15:24:35.396-0400][gc,marking ] GC(2) Concurrent Mark 2.207ms +[2023-11-03T15:24:35.396-0400][gc,marking ] GC(2) Concurrent Rebuild Remembered Sets +[2023-11-03T15:24:35.397-0400][gc,marking ] GC(2) Concurrent Rebuild Remembered Sets 1.105ms +[2023-11-03T15:24:35.397-0400][gc,start ] GC(2) Pause Cleanup +[2023-11-03T15:24:35.397-0400][gc ] GC(2) Pause Cleanup 10M->10M(1024M) 0.103ms +[2023-11-03T15:24:35.397-0400][gc,cpu ] GC(2) User=0.00s Sys=0.00s Real=0.00s +[2023-11-03T15:24:35.397-0400][gc,marking ] GC(2) Concurrent Cleanup for Next Mark +[2023-11-03T15:24:35.398-0400][gc,marking ] GC(2) Concurrent Cleanup for Next Mark 1.019ms +[2023-11-03T15:24:35.398-0400][gc ] GC(2) Concurrent Mark Cycle 5.951ms +[2023-11-03T15:24:35.865-0400][gc,start ] GC(3) Pause Young (Normal) (G1 Evacuation Pause) +[2023-11-03T15:24:35.865-0400][gc,task ] GC(3) Using 10 workers of 10 for evacuation +[2023-11-03T15:24:35.867-0400][gc,phases ] GC(3) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T15:24:35.867-0400][gc,phases ] GC(3) Merge Heap Roots: 0.0ms +[2023-11-03T15:24:35.867-0400][gc,phases ] GC(3) Evacuate Collection Set: 1.5ms +[2023-11-03T15:24:35.867-0400][gc,phases ] GC(3) Post Evacuate Collection Set: 0.6ms +[2023-11-03T15:24:35.867-0400][gc,phases ] GC(3) Other: 0.1ms +[2023-11-03T15:24:35.867-0400][gc,heap ] GC(3) Eden regions: 49->0(45) 
+[2023-11-03T15:24:35.867-0400][gc,heap ] GC(3) Survivor regions: 2->6(7) +[2023-11-03T15:24:35.867-0400][gc,heap ] GC(3) Old regions: 8->8 +[2023-11-03T15:24:35.867-0400][gc,heap ] GC(3) Archive regions: 2->2 +[2023-11-03T15:24:35.867-0400][gc,heap ] GC(3) Humongous regions: 129->129 +[2023-11-03T15:24:35.867-0400][gc,metaspace] GC(3) Metaspace: 30468K(30720K)->30468K(30720K) NonClass: 26982K(27072K)->26982K(27072K) Class: 3485K(3648K)->3485K(3648K) +[2023-11-03T15:24:35.867-0400][gc ] GC(3) Pause Young (Normal) (G1 Evacuation Pause) 187M->142M(1024M) 2.526ms +[2023-11-03T15:24:35.867-0400][gc,cpu ] GC(3) User=0.01s Sys=0.01s Real=0.00s +[2023-11-03T15:24:36.104-0400][gc,heap,exit] Heap +[2023-11-03T15:24:36.104-0400][gc,heap,exit] garbage-first heap total 1048576K, used 155766K [0x00000000c0000000, 0x0000000100000000) +[2023-11-03T15:24:36.104-0400][gc,heap,exit] region size 1024K, 16 young (16384K), 6 survivors (6144K) +[2023-11-03T15:24:36.104-0400][gc,heap,exit] Metaspace used 31300K, committed 31616K, reserved 1081344K +[2023-11-03T15:24:36.104-0400][gc,heap,exit] class space used 3628K, committed 3776K, reserved 1048576K diff --git a/logs/log-cleaner.log b/logs/log-cleaner.log new file mode 100644 index 0000000..2c78cb8 --- /dev/null +++ b/logs/log-cleaner.log @@ -0,0 +1,3 @@ +[2023-11-03 15:24:35,538] INFO Starting the log cleaner (kafka.log.LogCleaner) +[2023-11-03 15:24:35,887] INFO Shutting down the log cleaner. 
(kafka.log.LogCleaner) +[2023-11-03 15:24:54,108] INFO Starting the log cleaner (kafka.log.LogCleaner) diff --git a/logs/log-cleaner.log.2023-11-03-10 b/logs/log-cleaner.log.2023-11-03-10 new file mode 100644 index 0000000..e69de29 diff --git a/logs/log-cleaner.log.2023-11-03-14 b/logs/log-cleaner.log.2023-11-03-14 new file mode 100644 index 0000000..51bcb55 --- /dev/null +++ b/logs/log-cleaner.log.2023-11-03-14 @@ -0,0 +1,2 @@ +[2023-11-03 14:01:23,012] INFO Starting the log cleaner (kafka.log.LogCleaner) +[2023-11-03 14:29:33,677] INFO Shutting down the log cleaner. (kafka.log.LogCleaner) diff --git a/logs/server.log b/logs/server.log new file mode 100644 index 0000000..162fa42 --- /dev/null +++ b/logs/server.log @@ -0,0 +1,1575 @@ +[2023-11-03 15:23:24,851] INFO Reading configuration from: ./config/server.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 15:23:24,853] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain) +org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing ./config/server.properties + at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:198) + at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:125) + at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:91) +Caused by: java.lang.IllegalArgumentException: dataDir is not set + at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:424) + at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:194) + ... 2 more +[2023-11-03 15:23:24,855] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) +[2023-11-03 15:23:24,857] ERROR Exiting JVM with code 2 (org.apache.zookeeper.util.ServiceUtils) +[2023-11-03 15:23:44,574] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) +[2023-11-03 15:23:44,729] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) +[2023-11-03 15:23:44,778] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) +[2023-11-03 15:23:44,779] INFO starting (kafka.server.KafkaServer) +[2023-11-03 15:23:44,779] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer) +[2023-11-03 15:23:44,788] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient) +[2023-11-03 15:23:44,790] INFO Client environment:zookeeper.version=3.8.2-139d619b58292d7734b4fc83a0f44be4e7b0c986, built on 2023-07-05 19:24 UTC (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,790] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,790] INFO Client environment:java.version=17.0.6 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,790] INFO Client environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,790] INFO Client environment:java.home=/opt/openjdk-bin-17.0.6_p10 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,790] INFO Client 
environment:java.class.path=/scratch/kafka_2.13-3.6.0/bin/../libs/activation-1.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/argparse4j-0.7.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/audience-annotations-0.12.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/caffeine-2.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/checker-qual-3.19.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-beanutils-1.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-cli-1.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-collections-3.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-digester-2.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-io-2.11.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-lang3-3.8.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-logging-1.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-validator-1.7.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-basic-auth-extension-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-json-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-client-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-runtime-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-transforms-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/error_prone_annotations-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-api-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-locator-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-utils-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-core-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-databind-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-dataformat-csv-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-datatype-jdk8-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jack
son-jaxrs-base-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-json-provider-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-jaxb-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-scala_2.13-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.activation-api-1.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.inject-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.xml.bind-api-2.3.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javassist-3.29.2-GA.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.activation-api-1.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.annotation-api-1.3.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.servlet-api-3.1.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jaxb-api-2.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-client-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-common-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-core-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-hk2-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-server-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-client-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-continuation-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-http-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-io-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-security-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-server-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-servlet-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../l
ibs/jetty-servlets-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-ajax-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jline-3.22.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jopt-simple-5.0.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jose4j-0.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jsr305-3.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-clients-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-group-coordinator-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-log4j-appender-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-metadata-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-raft-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-server-common-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-shell-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-examples-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-scala_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-test-utils-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/lz4-java-1.8.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/maven-artifact-3.8.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-2.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-4.1.12.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-buffer-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-codec-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-handler-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/
../libs/netty-resolver-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-classes-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-unix-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/paranamer-2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/pcollections-4.0.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/plexus-utils-3.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reflections-0.10.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reload4j-1.2.25.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/rocksdbjni-7.9.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-collection-compat_2.13-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-java8-compat_2.13-1.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-library-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-logging_2.13-3.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-reflect-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-api-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-reload4j-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/snappy-java-1.1.10.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/swagger-annotations-2.2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/trogdor-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-jute-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zstd-jni-1.5.5-1.jar (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:java.compiler= 
(org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:os.version=6.4.3-cachyosGentooThinkPadP53 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:user.name=memartel (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:user.home=/home/memartel (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:user.dir=/scratch/kafka_2.13-3.6.0 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:os.memory.free=987MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,791] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,793] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@609bcfb6 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:23:44,795] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) +[2023-11-03 15:23:44,799] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:23:44,800] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) +[2023-11-03 15:23:44,801] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:23:44,803] WARN Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. 
(org.apache.zookeeper.ClientCnxn) +java.net.ConnectException: Connection refused + at java.base/sun.nio.ch.Net.pollConnect(Native Method) + at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) + at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) + at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344) + at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289) +[2023-11-03 15:23:45,907] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:23:45,909] WARN Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn) +java.net.ConnectException: Connection refused + at java.base/sun.nio.ch.Net.pollConnect(Native Method) + at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) + at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) + at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344) + at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289) +[2023-11-03 15:23:47,011] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:23:47,012] WARN Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. 
(org.apache.zookeeper.ClientCnxn) +java.net.ConnectException: Connection refused + at java.base/sun.nio.ch.Net.pollConnect(Native Method) + at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) + at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) + at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344) + at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289) +[2023-11-03 15:23:48,115] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:23:48,116] WARN Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn) +java.net.ConnectException: Connection refused + at java.base/sun.nio.ch.Net.pollConnect(Native Method) + at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) + at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) + at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344) + at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289) +[2023-11-03 15:23:49,218] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:23:49,218] WARN Session 0x0 for server localhost/[0:0:0:0:0:0:0:1]:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. 
(org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:50,320] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:23:50,321] WARN Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:51,423] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:23:51,424] WARN Session 0x0 for server localhost/[0:0:0:0:0:0:0:1]:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:52,526] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:23:52,527] WARN Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:53,630] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:23:53,631] WARN Session 0x0 for server localhost/[0:0:0:0:0:0:0:1]:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:54,732] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:23:54,734] WARN Session 0x0 for server localhost/[0:0:0:0:0:0:0:1]:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:55,835] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:23:55,837] WARN Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:56,938] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:23:56,940] WARN Session 0x0 for server localhost/[0:0:0:0:0:0:0:1]:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:58,041] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:23:58,043] WARN Session 0x0 for server localhost/[0:0:0:0:0:0:0:1]:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:59,144] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:23:59,146] WARN Session 0x0 for server localhost/[0:0:0:0:0:0:0:1]:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:23:59,481] INFO Reading configuration from: ./config/server.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:23:59,483] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
+org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing ./config/server.properties
+	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:198)
+	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:125)
+	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:91)
+Caused by: java.lang.IllegalArgumentException: dataDir is not set
+	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:424)
+	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:194)
+	... 2 more
+[2023-11-03 15:23:59,484] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
+[2023-11-03 15:23:59,486] ERROR Exiting JVM with code 2 (org.apache.zookeeper.util.ServiceUtils)
+[2023-11-03 15:24:00,248] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:24:00,249] WARN Session 0x0 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:24:01,350] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn)
+[2023-11-03 15:24:01,352] WARN Session 0x0 for server localhost/[0:0:0:0:0:0:0:1]:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
+java.net.ConnectException: Connection refused
+	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
+	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
+	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
+	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
+	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
+[2023-11-03 15:24:02,187] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
+[2023-11-03 15:24:02,190] INFO shutting down (kafka.server.KafkaServer)
+[2023-11-03 15:24:02,203] ERROR Fatal error during KafkaServer shutdown. (kafka.server.KafkaServer)
+java.lang.IllegalStateException: Kafka server is still starting up, cannot shut down!
+	at kafka.server.KafkaServer.shutdown(KafkaServer.scala:884)
+	at kafka.Kafka$.$anonfun$main$3(Kafka.scala:104)
+	at kafka.utils.Exit$.$anonfun$addShutdownHook$1(Exit.scala:38)
+	at java.base/java.lang.Thread.run(Thread.java:833)
+[2023-11-03 15:24:02,203] ERROR Halting Kafka.
(kafka.Kafka$)
+[2023-11-03 15:24:14,438] INFO Reading configuration from: ./config/server.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:14,440] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
+org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing ./config/server.properties
+	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:198)
+	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:125)
+	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:91)
+Caused by: java.lang.IllegalArgumentException: dataDir is not set
+	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:424)
+	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:194)
+	... 2 more
+[2023-11-03 15:24:14,441] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
+[2023-11-03 15:24:14,443] ERROR Exiting JVM with code 2 (org.apache.zookeeper.util.ServiceUtils)
+[2023-11-03 15:24:30,925] INFO Reading configuration from: ./config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,927] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,927] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,927] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,927] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,928] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
+[2023-11-03 15:24:30,928] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
+[2023-11-03 15:24:30,929] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
+[2023-11-03 15:24:30,929] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
+[2023-11-03 15:24:30,929] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
+[2023-11-03 15:24:30,930] INFO Reading configuration from: ./config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,930] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,930] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,930] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,930] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
+[2023-11-03 15:24:30,930] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
+[2023-11-03 15:24:30,938] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@4034c28c (org.apache.zookeeper.server.ServerMetrics)
+[2023-11-03 15:24:30,939] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
+[2023-11-03 15:24:30,939] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
+[2023-11-03 15:24:30,941] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
+[2023-11-03 15:24:30,947] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,947] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,947] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,947] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,947] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,947] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,947] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,947] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,947] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,947] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,948] INFO Server environment:zookeeper.version=3.8.2-139d619b58292d7734b4fc83a0f44be4e7b0c986, built on 2023-07-05 19:24 UTC (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,948] INFO Server environment:host.name=localhost (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,948] INFO Server environment:java.version=17.0.6 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,948] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,948] INFO Server environment:java.home=/opt/openjdk-bin-17.0.6_p10 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,948] INFO Server
environment:java.class.path=/scratch/kafka_2.13-3.6.0/bin/../libs/activation-1.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/argparse4j-0.7.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/audience-annotations-0.12.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/caffeine-2.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/checker-qual-3.19.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-beanutils-1.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-cli-1.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-collections-3.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-digester-2.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-io-2.11.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-lang3-3.8.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-logging-1.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-validator-1.7.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-basic-auth-extension-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-json-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-client-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-runtime-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-transforms-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/error_prone_annotations-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-api-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-locator-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-utils-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-core-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-databind-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-dataformat-csv-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-datatype-jdk8-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jack
son-jaxrs-base-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-json-provider-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-jaxb-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-scala_2.13-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.activation-api-1.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.inject-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.xml.bind-api-2.3.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javassist-3.29.2-GA.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.activation-api-1.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.annotation-api-1.3.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.servlet-api-3.1.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jaxb-api-2.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-client-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-common-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-core-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-hk2-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-server-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-client-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-continuation-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-http-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-io-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-security-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-server-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-servlet-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../l
ibs/jetty-servlets-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-ajax-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jline-3.22.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jopt-simple-5.0.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jose4j-0.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jsr305-3.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-clients-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-group-coordinator-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-log4j-appender-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-metadata-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-raft-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-server-common-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-shell-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-examples-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-scala_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-test-utils-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/lz4-java-1.8.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/maven-artifact-3.8.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-2.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-4.1.12.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-buffer-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-codec-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-handler-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/
../libs/netty-resolver-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-classes-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-unix-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/paranamer-2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/pcollections-4.0.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/plexus-utils-3.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reflections-0.10.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reload4j-1.2.25.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/rocksdbjni-7.9.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-collection-compat_2.13-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-java8-compat_2.13-1.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-library-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-logging_2.13-3.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-reflect-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-api-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-reload4j-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/snappy-java-1.1.10.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/swagger-annotations-2.2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/trogdor-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-jute-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zstd-jni-1.5.5-1.jar (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 15:24:30,948] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 15:24:30,948] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 15:24:30,948] INFO 
Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,948] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,948] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO Server environment:os.version=6.4.3-cachyosGentooThinkPadP53 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO Server environment:user.name=memartel (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO Server environment:user.home=/home/memartel (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO Server environment:user.dir=/scratch/kafka_2.13-3.6.0 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,949] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,950] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
+[2023-11-03 15:24:30,951] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,951] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,951] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
+[2023-11-03 15:24:30,952] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
+[2023-11-03 15:24:30,952] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
+[2023-11-03 15:24:30,952] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
+[2023-11-03 15:24:30,952] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
+[2023-11-03 15:24:30,952] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
+[2023-11-03 15:24:30,952] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
+[2023-11-03 15:24:30,952] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
+[2023-11-03 15:24:30,954] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,954] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,954] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
+[2023-11-03 15:24:30,954] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
+[2023-11-03 15:24:30,954] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:30,958] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
+[2023-11-03 15:24:30,958] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
+[2023-11-03 15:24:30,959] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 24 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
+[2023-11-03 15:24:30,963] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
+[2023-11-03 15:24:30,972] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
+[2023-11-03 15:24:30,972] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
+[2023-11-03 15:24:30,973] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
+[2023-11-03 15:24:30,973] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
+[2023-11-03 15:24:30,975] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
+[2023-11-03 15:24:30,975] INFO Reading snapshot /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileSnap)
+[2023-11-03 15:24:30,977] INFO The digest value is empty in snapshot (org.apache.zookeeper.server.DataTree)
+[2023-11-03 15:24:31,001] INFO 139 txns loaded in 20 ms (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
+[2023-11-03 15:24:31,001] INFO Snapshot loaded in 28 ms, highest zxid is 0x8b, digest is 308644508021 (org.apache.zookeeper.server.ZKDatabase)
+[2023-11-03 15:24:31,002] INFO Snapshotting: 0x8b to /tmp/zookeeper/version-2/snapshot.8b (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
+[2023-11-03 15:24:31,004] INFO Snapshot taken in 2 ms (org.apache.zookeeper.server.ZooKeeperServer)
+[2023-11-03 15:24:31,010] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
+[2023-11-03 15:24:31,010] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
+[2023-11-03 15:24:31,022] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
+[2023-11-03 15:24:31,022] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
+[2023-11-03 15:24:34,875] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
+[2023-11-03 15:24:35,029] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
+[2023-11-03 15:24:35,077] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
+[2023-11-03 15:24:35,078] INFO starting (kafka.server.KafkaServer)
+[2023-11-03 15:24:35,078] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
+[2023-11-03 15:24:35,087] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181.
(kafka.zookeeper.ZooKeeperClient)
+[2023-11-03 15:24:35,089] INFO Client environment:zookeeper.version=3.8.2-139d619b58292d7734b4fc83a0f44be4e7b0c986, built on 2023-07-05 19:24 UTC (org.apache.zookeeper.ZooKeeper)
+[2023-11-03 15:24:35,089] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper)
+[2023-11-03 15:24:35,089] INFO Client environment:java.version=17.0.6 (org.apache.zookeeper.ZooKeeper)
+[2023-11-03 15:24:35,089] INFO Client environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.ZooKeeper)
+[2023-11-03 15:24:35,089] INFO Client environment:java.home=/opt/openjdk-bin-17.0.6_p10 (org.apache.zookeeper.ZooKeeper)
+[2023-11-03 15:24:35,090] INFO Client environment:java.class.path=/scratch/kafka_2.13-3.6.0/bin/../libs/activation-1.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/argparse4j-0.7.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/audience-annotations-0.12.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/caffeine-2.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/checker-qual-3.19.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-beanutils-1.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-cli-1.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-collections-3.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-digester-2.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-io-2.11.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-lang3-3.8.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-logging-1.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-validator-1.7.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-basic-auth-extension-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-json-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-client-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-runtime-3.6.0.jar:
/scratch/kafka_2.13-3.6.0/bin/../libs/connect-transforms-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/error_prone_annotations-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-api-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-locator-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-utils-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-core-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-databind-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-dataformat-csv-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-datatype-jdk8-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-base-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-json-provider-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-jaxb-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-scala_2.13-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.activation-api-1.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.inject-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.xml.bind-api-2.3.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javassist-3.29.2-GA.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.activation-api-1.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.annotation-api-1.3.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.servlet-api-3.1.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jaxb-api-2.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-client-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-common-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-co
ntainer-servlet-core-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-hk2-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-server-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-client-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-continuation-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-http-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-io-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-security-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-server-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-servlet-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-servlets-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-ajax-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jline-3.22.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jopt-simple-5.0.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jose4j-0.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jsr305-3.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-clients-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-group-coordinator-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-log4j-appender-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-metadata-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-raft-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-server-common-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-shell-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-examples-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-scala_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-test-utils-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../l
ibs/kafka-tools-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/lz4-java-1.8.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/maven-artifact-3.8.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-2.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-4.1.12.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-buffer-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-codec-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-handler-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-resolver-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-classes-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-unix-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/paranamer-2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/pcollections-4.0.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/plexus-utils-3.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reflections-0.10.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reload4j-1.2.25.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/rocksdbjni-7.9.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-collection-compat_2.13-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-java8-compat_2.13-1.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-library-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-logging_2.13-3.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-reflect-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-api-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-reload4j-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/sn
appy-java-1.1.10.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/swagger-annotations-2.2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/trogdor-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-jute-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zstd-jni-1.5.5-1.jar (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:os.version=6.4.3-cachyosGentooThinkPadP53 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:user.name=memartel (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:user.home=/home/memartel (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:user.dir=/scratch/kafka_2.13-3.6.0 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:os.memory.free=986MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,090] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,092] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@609bcfb6 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:35,094] INFO jute.maxbuffer value is 4194304 Bytes 
(org.apache.zookeeper.ClientCnxnSocket) +[2023-11-03 15:24:35,098] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:24:35,099] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) +[2023-11-03 15:24:35,100] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:24:35,102] INFO Socket connection established, initiating session, client: /[0:0:0:0:0:0:0:1]:51124, server: localhost/[0:0:0:0:0:0:0:1]:2181 (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:24:35,106] INFO Creating new log file: log.8c (org.apache.zookeeper.server.persistence.FileTxnLog) +[2023-11-03 15:24:35,109] INFO Session establishment complete on server localhost/[0:0:0:0:0:0:0:1]:2181, session id = 0x10000517bf90000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:24:35,111] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) +[2023-11-03 15:24:35,255] INFO Cluster ID = YJQPVe6YTAyNHvtG2h9q7w (kafka.server.KafkaServer) +[2023-11-03 15:24:35,292] INFO KafkaConfig values: + advertised.listeners = null + alter.config.policy.class.name = null + alter.log.dirs.replication.quota.window.num = 11 + alter.log.dirs.replication.quota.window.size.seconds = 1 + authorizer.class.name = + auto.create.topics.enable = true + auto.include.jmx.reporter = true + auto.leader.rebalance.enable = true + background.threads = 10 + broker.heartbeat.interval.ms = 2000 + broker.id = 0 + broker.id.generation.enable = true + broker.rack = null + broker.session.timeout.ms = 9000 + client.quota.callback.class = null + compression.type = producer + connection.failed.authentication.delay.ms = 100 + connections.max.idle.ms = 600000 + connections.max.reauth.ms = 0 + control.plane.listener.name = null + controlled.shutdown.enable = true + controlled.shutdown.max.retries = 3 + controlled.shutdown.retry.backoff.ms = 5000 + controller.listener.names = null + controller.quorum.append.linger.ms = 25 + controller.quorum.election.backoff.max.ms = 1000 + controller.quorum.election.timeout.ms = 1000 + controller.quorum.fetch.timeout.ms = 2000 + controller.quorum.request.timeout.ms = 2000 + controller.quorum.retry.backoff.ms = 20 + controller.quorum.voters = [] + controller.quota.window.num = 11 + controller.quota.window.size.seconds = 1 + controller.socket.timeout.ms = 30000 + create.topic.policy.class.name = null + default.replication.factor = 1 + delegation.token.expiry.check.interval.ms = 3600000 + delegation.token.expiry.time.ms = 86400000 + delegation.token.master.key = null + delegation.token.max.lifetime.ms = 604800000 + delegation.token.secret.key = null + delete.records.purgatory.purge.interval.requests = 1 + delete.topic.enable = true + early.start.listeners = null + fetch.max.bytes = 57671680 + fetch.purgatory.purge.interval.requests = 1000 + group.consumer.assignors = 
[org.apache.kafka.coordinator.group.assignor.RangeAssignor] + group.consumer.heartbeat.interval.ms = 5000 + group.consumer.max.heartbeat.interval.ms = 15000 + group.consumer.max.session.timeout.ms = 60000 + group.consumer.max.size = 2147483647 + group.consumer.min.heartbeat.interval.ms = 5000 + group.consumer.min.session.timeout.ms = 45000 + group.consumer.session.timeout.ms = 45000 + group.coordinator.new.enable = false + group.coordinator.threads = 1 + group.initial.rebalance.delay.ms = 0 + group.max.session.timeout.ms = 1800000 + group.max.size = 2147483647 + group.min.session.timeout.ms = 6000 + initial.broker.registration.timeout.ms = 60000 + inter.broker.listener.name = null + inter.broker.protocol.version = 3.6-IV2 + kafka.metrics.polling.interval.secs = 10 + kafka.metrics.reporters = [] + leader.imbalance.check.interval.seconds = 300 + leader.imbalance.per.broker.percentage = 10 + listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL + listeners = PLAINTEXT://:9092 + log.cleaner.backoff.ms = 15000 + log.cleaner.dedupe.buffer.size = 134217728 + log.cleaner.delete.retention.ms = 86400000 + log.cleaner.enable = true + log.cleaner.io.buffer.load.factor = 0.9 + log.cleaner.io.buffer.size = 524288 + log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 + log.cleaner.max.compaction.lag.ms = 9223372036854775807 + log.cleaner.min.cleanable.ratio = 0.5 + log.cleaner.min.compaction.lag.ms = 0 + log.cleaner.threads = 1 + log.cleanup.policy = [delete] + log.dir = /tmp/kafka-logs + log.dirs = /tmp/kafka-logs + log.flush.interval.messages = 9223372036854775807 + log.flush.interval.ms = null + log.flush.offset.checkpoint.interval.ms = 60000 + log.flush.scheduler.interval.ms = 9223372036854775807 + log.flush.start.offset.checkpoint.interval.ms = 60000 + log.index.interval.bytes = 4096 + log.index.size.max.bytes = 10485760 + log.local.retention.bytes = -2 + log.local.retention.ms = -2 + 
log.message.downconversion.enable = true + log.message.format.version = 3.0-IV1 + log.message.timestamp.after.max.ms = 9223372036854775807 + log.message.timestamp.before.max.ms = 9223372036854775807 + log.message.timestamp.difference.max.ms = 9223372036854775807 + log.message.timestamp.type = CreateTime + log.preallocate = false + log.retention.bytes = -1 + log.retention.check.interval.ms = 300000 + log.retention.hours = 168 + log.retention.minutes = null + log.retention.ms = null + log.roll.hours = 168 + log.roll.jitter.hours = 0 + log.roll.jitter.ms = null + log.roll.ms = null + log.segment.bytes = 1073741824 + log.segment.delete.delay.ms = 60000 + max.connection.creation.rate = 2147483647 + max.connections = 2147483647 + max.connections.per.ip = 2147483647 + max.connections.per.ip.overrides = + max.incremental.fetch.session.cache.slots = 1000 + message.max.bytes = 1048588 + metadata.log.dir = null + metadata.log.max.record.bytes.between.snapshots = 20971520 + metadata.log.max.snapshot.interval.ms = 3600000 + metadata.log.segment.bytes = 1073741824 + metadata.log.segment.min.bytes = 8388608 + metadata.log.segment.ms = 604800000 + metadata.max.idle.interval.ms = 500 + metadata.max.retention.bytes = 104857600 + metadata.max.retention.ms = 604800000 + metric.reporters = [] + metrics.num.samples = 2 + metrics.recording.level = INFO + metrics.sample.window.ms = 30000 + min.insync.replicas = 1 + node.id = 0 + num.io.threads = 8 + num.network.threads = 3 + num.partitions = 1 + num.recovery.threads.per.data.dir = 1 + num.replica.alter.log.dirs.threads = null + num.replica.fetchers = 1 + offset.metadata.max.bytes = 4096 + offsets.commit.required.acks = -1 + offsets.commit.timeout.ms = 5000 + offsets.load.buffer.size = 5242880 + offsets.retention.check.interval.ms = 600000 + offsets.retention.minutes = 10080 + offsets.topic.compression.codec = 0 + offsets.topic.num.partitions = 50 + offsets.topic.replication.factor = 1 + offsets.topic.segment.bytes = 104857600 + 
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding + password.encoder.iterations = 4096 + password.encoder.key.length = 128 + password.encoder.keyfactory.algorithm = null + password.encoder.old.secret = null + password.encoder.secret = null + principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder + process.roles = [] + producer.id.expiration.check.interval.ms = 600000 + producer.id.expiration.ms = 86400000 + producer.purgatory.purge.interval.requests = 1000 + queued.max.request.bytes = -1 + queued.max.requests = 500 + quota.window.num = 11 + quota.window.size.seconds = 1 + remote.log.index.file.cache.total.size.bytes = 1073741824 + remote.log.manager.task.interval.ms = 30000 + remote.log.manager.task.retry.backoff.max.ms = 30000 + remote.log.manager.task.retry.backoff.ms = 500 + remote.log.manager.task.retry.jitter = 0.2 + remote.log.manager.thread.pool.size = 10 + remote.log.metadata.custom.metadata.max.bytes = 128 + remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager + remote.log.metadata.manager.class.path = null + remote.log.metadata.manager.impl.prefix = rlmm.config. + remote.log.metadata.manager.listener.name = null + remote.log.reader.max.pending.tasks = 100 + remote.log.reader.threads = 10 + remote.log.storage.manager.class.name = null + remote.log.storage.manager.class.path = null + remote.log.storage.manager.impl.prefix = rsm.config. 
+ remote.log.storage.system.enable = false + replica.fetch.backoff.ms = 1000 + replica.fetch.max.bytes = 1048576 + replica.fetch.min.bytes = 1 + replica.fetch.response.max.bytes = 10485760 + replica.fetch.wait.max.ms = 500 + replica.high.watermark.checkpoint.interval.ms = 5000 + replica.lag.time.max.ms = 30000 + replica.selector.class = null + replica.socket.receive.buffer.bytes = 65536 + replica.socket.timeout.ms = 30000 + replication.quota.window.num = 11 + replication.quota.window.size.seconds = 1 + request.timeout.ms = 30000 + reserved.broker.max.id = 1000 + sasl.client.callback.handler.class = null + sasl.enabled.mechanisms = [GSSAPI] + sasl.jaas.config = null + sasl.kerberos.kinit.cmd = /usr/bin/kinit + sasl.kerberos.min.time.before.relogin = 60000 + sasl.kerberos.principal.to.local.rules = [DEFAULT] + sasl.kerberos.service.name = null + sasl.kerberos.ticket.renew.jitter = 0.05 + sasl.kerberos.ticket.renew.window.factor = 0.8 + sasl.login.callback.handler.class = null + sasl.login.class = null + sasl.login.connect.timeout.ms = null + sasl.login.read.timeout.ms = null + sasl.login.refresh.buffer.seconds = 300 + sasl.login.refresh.min.period.seconds = 60 + sasl.login.refresh.window.factor = 0.8 + sasl.login.refresh.window.jitter = 0.05 + sasl.login.retry.backoff.max.ms = 10000 + sasl.login.retry.backoff.ms = 100 + sasl.mechanism.controller.protocol = GSSAPI + sasl.mechanism.inter.broker.protocol = GSSAPI + sasl.oauthbearer.clock.skew.seconds = 30 + sasl.oauthbearer.expected.audience = null + sasl.oauthbearer.expected.issuer = null + sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 + sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 + sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 + sasl.oauthbearer.jwks.endpoint.url = null + sasl.oauthbearer.scope.claim.name = scope + sasl.oauthbearer.sub.claim.name = sub + sasl.oauthbearer.token.endpoint.url = null + sasl.server.callback.handler.class = null + sasl.server.max.receive.size = 524288 + 
security.inter.broker.protocol = PLAINTEXT + security.providers = null + server.max.startup.time.ms = 9223372036854775807 + socket.connection.setup.timeout.max.ms = 30000 + socket.connection.setup.timeout.ms = 10000 + socket.listen.backlog.size = 50 + socket.receive.buffer.bytes = 102400 + socket.request.max.bytes = 104857600 + socket.send.buffer.bytes = 102400 + ssl.cipher.suites = [] + ssl.client.auth = none + ssl.enabled.protocols = [TLSv1.2, TLSv1.3] + ssl.endpoint.identification.algorithm = https + ssl.engine.factory.class = null + ssl.key.password = null + ssl.keymanager.algorithm = SunX509 + ssl.keystore.certificate.chain = null + ssl.keystore.key = null + ssl.keystore.location = null + ssl.keystore.password = null + ssl.keystore.type = JKS + ssl.principal.mapping.rules = DEFAULT + ssl.protocol = TLSv1.3 + ssl.provider = null + ssl.secure.random.implementation = null + ssl.trustmanager.algorithm = PKIX + ssl.truststore.certificates = null + ssl.truststore.location = null + ssl.truststore.password = null + ssl.truststore.type = JKS + transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 + transaction.max.timeout.ms = 900000 + transaction.partition.verification.enable = true + transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 + transaction.state.log.load.buffer.size = 5242880 + transaction.state.log.min.isr = 1 + transaction.state.log.num.partitions = 50 + transaction.state.log.replication.factor = 1 + transaction.state.log.segment.bytes = 104857600 + transactional.id.expiration.ms = 604800000 + unclean.leader.election.enable = false + unstable.api.versions.enable = false + zookeeper.clientCnxnSocket = null + zookeeper.connect = localhost:2181 + zookeeper.connection.timeout.ms = 18000 + zookeeper.max.in.flight.requests = 10 + zookeeper.metadata.migration.enable = false + zookeeper.session.timeout.ms = 18000 + zookeeper.set.acl = false + zookeeper.ssl.cipher.suites = null + zookeeper.ssl.client.enable = false + 
zookeeper.ssl.crl.enable = false + zookeeper.ssl.enabled.protocols = null + zookeeper.ssl.endpoint.identification.algorithm = HTTPS + zookeeper.ssl.keystore.location = null + zookeeper.ssl.keystore.password = null + zookeeper.ssl.keystore.type = null + zookeeper.ssl.ocsp.enable = false + zookeeper.ssl.protocol = TLSv1.2 + zookeeper.ssl.truststore.location = null + zookeeper.ssl.truststore.password = null + zookeeper.ssl.truststore.type = null + (kafka.server.KafkaConfig) +[2023-11-03 15:24:35,318] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:35,318] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:35,319] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:35,320] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:35,344] INFO Loading logs from log dirs ArraySeq(/tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,352] INFO Skipping recovery of 51 logs from /tmp/kafka-logs since clean shutdown file was found (kafka.log.LogManager) +[2023-11-03 15:24:35,386] INFO [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-logs] Loading producer state till offset 11 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,393] INFO [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-logs] Reloading from producer snapshot and rebuilding producer state from offset 11 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,393] INFO [ProducerStateManager partition=__consumer_offsets-28]Loading producer state from snapshot file 'SnapshotFile(offset=11, file=/tmp/kafka-logs/__consumer_offsets-28/00000000000000000011.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) +[2023-11-03 15:24:35,400] INFO [LogLoader 
partition=__consumer_offsets-28, dir=/tmp/kafka-logs] Producer state recovery took 7ms for snapshot load and 0ms for segment recovery from offset 11 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,411] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-28, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=28, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=11) with 1 segments, local-log-start-offset 0 and log-end-offset 11 in 55ms (1/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,413] INFO [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,415] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-13, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=13, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (2/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,417] INFO [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,418] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-43, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=43, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (3/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,420] INFO [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,421] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-6, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, 
partition=6, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (4/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,423] INFO [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,424] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-36, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=36, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (5/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,426] INFO [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,427] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-21, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=21, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (6/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,429] INFO [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,430] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-12, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=12, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (7/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,432] INFO [LogLoader partition=__consumer_offsets-42, dir=/tmp/kafka-logs] Loading producer 
state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,433] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-42, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=42, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (8/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,435] INFO [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,436] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-27, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=27, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 4ms (9/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,438] INFO [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,439] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-20, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=20, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (10/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,441] INFO [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,442] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-5, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=5, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, 
local-log-start-offset 0 and log-end-offset 0 in 3ms (11/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,444] INFO [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,444] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-35, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=35, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (12/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,446] INFO [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,447] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-0, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (13/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,448] INFO [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,449] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-30, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=30, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (14/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,451] INFO [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,452] 
INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-15, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=15, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (15/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,453] INFO [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,454] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-45, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=45, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (16/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,456] INFO [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,456] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-8, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=8, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (17/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,458] INFO [LogLoader partition=__consumer_offsets-38, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,459] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-38, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=38, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (18/51 completed in /tmp/kafka-logs) 
(kafka.log.LogManager) +[2023-11-03 15:24:35,460] INFO [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,461] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-23, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=23, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (19/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,462] INFO [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,463] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-14, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=14, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (20/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,464] INFO [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,465] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-44, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=44, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (21/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,466] INFO [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,467] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-29, 
topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=29, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (22/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,469] INFO [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,470] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-22, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=22, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (23/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,472] INFO [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,472] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-7, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=7, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (24/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,474] INFO [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,474] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-37, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=37, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (25/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,476] INFO [LogLoader 
partition=__consumer_offsets-32, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,476] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-32, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=32, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (26/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,478] INFO [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,478] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-17, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=17, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (27/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,479] INFO [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,480] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-47, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=47, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (28/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,482] INFO [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,482] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-40, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=40, 
highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (29/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,484] INFO [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,484] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-25, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=25, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (30/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,486] INFO [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,486] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-2, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=2, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (31/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,487] INFO [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,488] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-16, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=16, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (32/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,489] INFO [LogLoader partition=__consumer_offsets-1, dir=/tmp/kafka-logs] Loading producer state till 
offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,490] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-1, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=1, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (33/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,491] INFO [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,492] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-46, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=46, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (34/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,493] INFO [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,494] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-31, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=31, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (35/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,495] INFO [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,496] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-24, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=24, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, 
local-log-start-offset 0 and log-end-offset 0 in 2ms (36/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,497] INFO [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,498] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-9, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=9, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (37/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,499] INFO [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,500] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-39, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=39, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (38/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,501] INFO [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,502] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-49, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=49, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (39/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,504] INFO [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,504] 
INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-26, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=26, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (40/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,505] INFO [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,506] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-11, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=11, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (41/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,508] INFO [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,508] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-4, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (42/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,510] INFO [LogLoader partition=__consumer_offsets-34, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,510] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-34, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=34, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (43/51 completed in /tmp/kafka-logs) 
(kafka.log.LogManager) +[2023-11-03 15:24:35,512] INFO [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,512] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-19, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=19, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (44/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,513] INFO [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,514] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-48, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=48, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (45/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,515] INFO [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,516] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-33, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=33, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (46/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,517] INFO [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,517] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-10, 
topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=10, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (47/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,519] INFO [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,520] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-41, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=41, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (48/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,521] INFO [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,522] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-18, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=18, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (49/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,523] INFO [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,523] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-3, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=3, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (50/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,525] INFO [LogLoader 
partition=OrderEventQA2-0, dir=/tmp/kafka-logs] Loading producer state till offset 10 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,525] INFO [LogLoader partition=OrderEventQA2-0, dir=/tmp/kafka-logs] Reloading from producer snapshot and rebuilding producer state from offset 10 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,525] INFO [ProducerStateManager partition=OrderEventQA2-0]Loading producer state from snapshot file 'SnapshotFile(offset=10, file=/tmp/kafka-logs/OrderEventQA2-0/00000000000000000010.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) +[2023-11-03 15:24:35,525] INFO [LogLoader partition=OrderEventQA2-0, dir=/tmp/kafka-logs] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 10 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:35,525] INFO Completed load of Log(dir=/tmp/kafka-logs/OrderEventQA2-0, topicId=78sflbJnR2GXTOOat5Yo7Q, topic=OrderEventQA2, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=10) with 1 segments, local-log-start-offset 0 and log-end-offset 10 in 2ms (51/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:35,528] INFO Loaded 51 logs in 183ms (kafka.log.LogManager) +[2023-11-03 15:24:35,529] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) +[2023-11-03 15:24:35,529] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) +[2023-11-03 15:24:35,554] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) +[2023-11-03 15:24:35,564] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) +[2023-11-03 15:24:35,576] INFO [MetadataCache brokerId=0] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) +[2023-11-03 15:24:35,592] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 15:24:35,772] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) +[2023-11-03 15:24:35,784] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) +[2023-11-03 15:24:35,788] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 15:24:35,803] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,804] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,805] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,806] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,806] INFO [ExpirationReaper-0-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,815] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) +[2023-11-03 15:24:35,815] INFO [AddPartitionsToTxnSenderThread-0]: Starting (kafka.server.AddPartitionsToTxnManager) +[2023-11-03 15:24:35,849] INFO Creating /brokers/ids/0 (is it secure? 
false) (kafka.zk.KafkaZkClient) +[2023-11-03 15:24:35,862] ERROR Error while creating ephemeral at /brokers/ids/0, node already exists and owner '0x100000330eb0000' does not match current session '0x10000517bf90000' (kafka.zk.KafkaZkClient$CheckedEphemeral) +[2023-11-03 15:24:35,869] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) +org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists + at org.apache.zookeeper.KeeperException.create(KeeperException.java:126) + at kafka.zk.KafkaZkClient$CheckedEphemeral.getAfterNodeExists(KafkaZkClient.scala:2189) + at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:2127) + at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:2094) + at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:106) + at kafka.server.KafkaServer.startup(KafkaServer.scala:366) + at kafka.Kafka$.main(Kafka.scala:113) + at kafka.Kafka.main(Kafka.scala) +[2023-11-03 15:24:35,870] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer) +[2023-11-03 15:24:35,871] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Stopping socket server request processors (kafka.network.SocketServer) +[2023-11-03 15:24:35,873] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Stopped socket server request processors (kafka.network.SocketServer) +[2023-11-03 15:24:35,875] INFO [ReplicaManager broker=0] Shutting down (kafka.server.ReplicaManager) +[2023-11-03 15:24:35,876] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler) +[2023-11-03 15:24:35,876] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler) +[2023-11-03 15:24:35,876] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler) +[2023-11-03 15:24:35,876] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager) +[2023-11-03 
15:24:35,877] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager) +[2023-11-03 15:24:35,877] INFO [ReplicaAlterLogDirsManager on broker 0] shutting down (kafka.server.ReplicaAlterLogDirsManager) +[2023-11-03 15:24:35,877] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager) +[2023-11-03 15:24:35,877] INFO [ExpirationReaper-0-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,878] INFO [ExpirationReaper-0-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,878] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,878] INFO [ExpirationReaper-0-RemoteFetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,878] INFO [ExpirationReaper-0-RemoteFetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,878] INFO [ExpirationReaper-0-RemoteFetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,878] INFO [ExpirationReaper-0-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,879] INFO [ExpirationReaper-0-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,879] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,879] INFO [ExpirationReaper-0-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,879] INFO [ExpirationReaper-0-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,879] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown 
completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,879] INFO [ExpirationReaper-0-ElectLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,879] INFO [ExpirationReaper-0-ElectLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,879] INFO [ExpirationReaper-0-ElectLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 15:24:35,884] INFO [AddPartitionsToTxnSenderThread-0]: Shutting down (kafka.server.AddPartitionsToTxnManager) +[2023-11-03 15:24:35,884] INFO [AddPartitionsToTxnSenderThread-0]: Stopped (kafka.server.AddPartitionsToTxnManager) +[2023-11-03 15:24:35,884] INFO [AddPartitionsToTxnSenderThread-0]: Shutdown completed (kafka.server.AddPartitionsToTxnManager) +[2023-11-03 15:24:35,885] INFO [ReplicaManager broker=0] Shut down completely (kafka.server.ReplicaManager) +[2023-11-03 15:24:35,885] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Shutting down (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 15:24:35,885] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Stopped (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 15:24:35,885] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Shutdown completed (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 15:24:35,886] INFO Broker to controller channel manager for alter-partition shutdown (kafka.server.BrokerToControllerChannelManagerImpl) +[2023-11-03 15:24:35,886] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Shutting down (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 15:24:35,886] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Stopped (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 15:24:35,886] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Shutdown completed 
(kafka.server.BrokerToControllerRequestThread) +[2023-11-03 15:24:35,886] INFO Broker to controller channel manager for forwarding shutdown (kafka.server.BrokerToControllerChannelManagerImpl) +[2023-11-03 15:24:35,887] INFO Shutting down. (kafka.log.LogManager) +[2023-11-03 15:24:35,888] INFO [kafka-log-cleaner-thread-0]: Shutting down (kafka.log.LogCleaner$CleanerThread) +[2023-11-03 15:24:35,888] INFO [kafka-log-cleaner-thread-0]: Stopped (kafka.log.LogCleaner$CleanerThread) +[2023-11-03 15:24:35,888] INFO [kafka-log-cleaner-thread-0]: Shutdown completed (kafka.log.LogCleaner$CleanerThread) +[2023-11-03 15:24:35,906] INFO Shutdown complete. (kafka.log.LogManager) +[2023-11-03 15:24:35,908] INFO [feature-zk-node-event-process-thread]: Shutting down (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) +[2023-11-03 15:24:35,908] INFO [feature-zk-node-event-process-thread]: Stopped (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) +[2023-11-03 15:24:35,908] INFO [feature-zk-node-event-process-thread]: Shutdown completed (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) +[2023-11-03 15:24:35,908] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient) +[2023-11-03 15:24:36,012] INFO Session: 0x10000517bf90000 closed (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:36,012] INFO EventThread shut down for session: 0x10000517bf90000 (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:24:36,014] INFO [ZooKeeperClient Kafka server] Closed. 
(kafka.zookeeper.ZooKeeperClient) +[2023-11-03 15:24:36,014] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,016] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,016] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,016] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,017] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,017] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,017] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,017] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,017] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,017] INFO [ThrottledChannelReaper-ControllerMutation]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,017] INFO [ThrottledChannelReaper-ControllerMutation]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,017] INFO [ThrottledChannelReaper-ControllerMutation]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:36,019] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Shutting down socket server (kafka.network.SocketServer) +[2023-11-03 15:24:36,085] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Shutdown completed (kafka.network.SocketServer) +[2023-11-03 15:24:36,086] INFO 
Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics) +[2023-11-03 15:24:36,086] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics) +[2023-11-03 15:24:36,090] INFO Broker and topic stats closed (kafka.server.BrokerTopicStats) +[2023-11-03 15:24:36,098] INFO App info kafka.server for 0 unregistered (org.apache.kafka.common.utils.AppInfoParser) +[2023-11-03 15:24:36,099] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer) +[2023-11-03 15:24:36,099] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$) +org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists + at org.apache.zookeeper.KeeperException.create(KeeperException.java:126) + at kafka.zk.KafkaZkClient$CheckedEphemeral.getAfterNodeExists(KafkaZkClient.scala:2189) + at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:2127) + at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:2094) + at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:106) + at kafka.server.KafkaServer.startup(KafkaServer.scala:366) + at kafka.Kafka$.main(Kafka.scala:113) + at kafka.Kafka.main(Kafka.scala) +[2023-11-03 15:24:36,100] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer) +[2023-11-03 15:24:51,853] INFO Expiring session 0x100000330eb0000, timeout of 18000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 15:24:53,478] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) +[2023-11-03 15:24:53,634] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) +[2023-11-03 15:24:53,682] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) +[2023-11-03 15:24:53,683] INFO starting (kafka.server.KafkaServer) +[2023-11-03 15:24:53,683] INFO Connecting to zookeeper on localhost:2181 
(kafka.server.KafkaServer) +[2023-11-03 15:24:53,692] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient) +[2023-11-03 15:24:53,694] INFO Client environment:zookeeper.version=3.8.2-139d619b58292d7734b4fc83a0f44be4e7b0c986, built on 2023-07-05 19:24 UTC (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,694] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,694] INFO Client environment:java.version=17.0.6 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:java.home=/opt/openjdk-bin-17.0.6_p10 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:java.class.path=/scratch/kafka_2.13-3.6.0/bin/../libs/activation-1.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/argparse4j-0.7.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/audience-annotations-0.12.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/caffeine-2.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/checker-qual-3.19.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-beanutils-1.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-cli-1.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-collections-3.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-digester-2.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-io-2.11.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-lang3-3.8.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-logging-1.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-validator-1.7.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-basic-auth-extension-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-json-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-3.6.0.ja
r:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-client-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-runtime-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-transforms-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/error_prone_annotations-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-api-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-locator-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-utils-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-core-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-databind-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-dataformat-csv-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-datatype-jdk8-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-base-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-json-provider-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-jaxb-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-scala_2.13-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.activation-api-1.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.inject-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.xml.bind-api-2.3.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javassist-3.29.2-GA.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.activation-api-1.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.annotation-api-1.3.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.servlet-api-3.1.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jaxb-api-2.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-client-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-com
mon-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-core-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-hk2-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-server-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-client-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-continuation-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-http-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-io-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-security-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-server-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-servlet-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-servlets-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-ajax-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jline-3.22.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jopt-simple-5.0.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jose4j-0.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jsr305-3.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-clients-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-group-coordinator-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-log4j-appender-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-metadata-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-raft-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-server-common-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-shell-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-examples-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-
streams-scala_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-test-utils-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/lz4-java-1.8.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/maven-artifact-3.8.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-2.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-4.1.12.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-buffer-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-codec-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-handler-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-resolver-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-classes-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-unix-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/paranamer-2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/pcollections-4.0.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/plexus-utils-3.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reflections-0.10.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reload4j-1.2.25.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/rocksdbjni-7.9.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-collection-compat_2.13-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-java8-compat_2.13-1.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-library-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-logging_2.13-3.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-reflect-2.13.11.jar:/scratch/kafka_2.13-3.6.0/b
in/../libs/slf4j-api-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-reload4j-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/snappy-java-1.1.10.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/swagger-annotations-2.2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/trogdor-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-jute-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zstd-jni-1.5.5-1.jar (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:os.version=6.4.3-cachyosGentooThinkPadP53 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:user.name=memartel (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:user.home=/home/memartel (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:user.dir=/scratch/kafka_2.13-3.6.0 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:os.memory.free=986MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,695] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,696] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,697] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 
watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@3fce8fd9 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 15:24:53,700] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) +[2023-11-03 15:24:53,704] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:24:53,705] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) +[2023-11-03 15:24:53,706] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:24:53,708] INFO Socket connection established, initiating session, client: /127.0.0.1:35934, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:24:53,710] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x10000517bf90001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) +[2023-11-03 15:24:53,712] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) +[2023-11-03 15:24:53,843] INFO Cluster ID = YJQPVe6YTAyNHvtG2h9q7w (kafka.server.KafkaServer) +[2023-11-03 15:24:53,877] INFO KafkaConfig values: + advertised.listeners = null + alter.config.policy.class.name = null + alter.log.dirs.replication.quota.window.num = 11 + alter.log.dirs.replication.quota.window.size.seconds = 1 + authorizer.class.name = + auto.create.topics.enable = true + auto.include.jmx.reporter = true + auto.leader.rebalance.enable = true + background.threads = 10 + broker.heartbeat.interval.ms = 2000 + broker.id = 0 + broker.id.generation.enable = true + broker.rack = null + broker.session.timeout.ms = 9000 + client.quota.callback.class = null + compression.type = producer + connection.failed.authentication.delay.ms = 100 + connections.max.idle.ms = 600000 + connections.max.reauth.ms = 0 + control.plane.listener.name = null + controlled.shutdown.enable = true + controlled.shutdown.max.retries = 3 + controlled.shutdown.retry.backoff.ms = 5000 + controller.listener.names = null + controller.quorum.append.linger.ms = 25 + controller.quorum.election.backoff.max.ms = 1000 + controller.quorum.election.timeout.ms = 1000 + controller.quorum.fetch.timeout.ms = 2000 + controller.quorum.request.timeout.ms = 2000 + controller.quorum.retry.backoff.ms = 20 + controller.quorum.voters = [] + controller.quota.window.num = 11 + controller.quota.window.size.seconds = 1 + controller.socket.timeout.ms = 30000 + create.topic.policy.class.name = null + default.replication.factor = 1 + delegation.token.expiry.check.interval.ms = 3600000 + delegation.token.expiry.time.ms = 86400000 + delegation.token.master.key = null + delegation.token.max.lifetime.ms = 604800000 + delegation.token.secret.key = null + delete.records.purgatory.purge.interval.requests = 1 + delete.topic.enable = true + early.start.listeners = null + fetch.max.bytes = 57671680 + fetch.purgatory.purge.interval.requests = 1000 + group.consumer.assignors = 
[org.apache.kafka.coordinator.group.assignor.RangeAssignor] + group.consumer.heartbeat.interval.ms = 5000 + group.consumer.max.heartbeat.interval.ms = 15000 + group.consumer.max.session.timeout.ms = 60000 + group.consumer.max.size = 2147483647 + group.consumer.min.heartbeat.interval.ms = 5000 + group.consumer.min.session.timeout.ms = 45000 + group.consumer.session.timeout.ms = 45000 + group.coordinator.new.enable = false + group.coordinator.threads = 1 + group.initial.rebalance.delay.ms = 0 + group.max.session.timeout.ms = 1800000 + group.max.size = 2147483647 + group.min.session.timeout.ms = 6000 + initial.broker.registration.timeout.ms = 60000 + inter.broker.listener.name = null + inter.broker.protocol.version = 3.6-IV2 + kafka.metrics.polling.interval.secs = 10 + kafka.metrics.reporters = [] + leader.imbalance.check.interval.seconds = 300 + leader.imbalance.per.broker.percentage = 10 + listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL + listeners = PLAINTEXT://:9092 + log.cleaner.backoff.ms = 15000 + log.cleaner.dedupe.buffer.size = 134217728 + log.cleaner.delete.retention.ms = 86400000 + log.cleaner.enable = true + log.cleaner.io.buffer.load.factor = 0.9 + log.cleaner.io.buffer.size = 524288 + log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 + log.cleaner.max.compaction.lag.ms = 9223372036854775807 + log.cleaner.min.cleanable.ratio = 0.5 + log.cleaner.min.compaction.lag.ms = 0 + log.cleaner.threads = 1 + log.cleanup.policy = [delete] + log.dir = /tmp/kafka-logs + log.dirs = /tmp/kafka-logs + log.flush.interval.messages = 9223372036854775807 + log.flush.interval.ms = null + log.flush.offset.checkpoint.interval.ms = 60000 + log.flush.scheduler.interval.ms = 9223372036854775807 + log.flush.start.offset.checkpoint.interval.ms = 60000 + log.index.interval.bytes = 4096 + log.index.size.max.bytes = 10485760 + log.local.retention.bytes = -2 + log.local.retention.ms = -2 + 
log.message.downconversion.enable = true + log.message.format.version = 3.0-IV1 + log.message.timestamp.after.max.ms = 9223372036854775807 + log.message.timestamp.before.max.ms = 9223372036854775807 + log.message.timestamp.difference.max.ms = 9223372036854775807 + log.message.timestamp.type = CreateTime + log.preallocate = false + log.retention.bytes = -1 + log.retention.check.interval.ms = 300000 + log.retention.hours = 168 + log.retention.minutes = null + log.retention.ms = null + log.roll.hours = 168 + log.roll.jitter.hours = 0 + log.roll.jitter.ms = null + log.roll.ms = null + log.segment.bytes = 1073741824 + log.segment.delete.delay.ms = 60000 + max.connection.creation.rate = 2147483647 + max.connections = 2147483647 + max.connections.per.ip = 2147483647 + max.connections.per.ip.overrides = + max.incremental.fetch.session.cache.slots = 1000 + message.max.bytes = 1048588 + metadata.log.dir = null + metadata.log.max.record.bytes.between.snapshots = 20971520 + metadata.log.max.snapshot.interval.ms = 3600000 + metadata.log.segment.bytes = 1073741824 + metadata.log.segment.min.bytes = 8388608 + metadata.log.segment.ms = 604800000 + metadata.max.idle.interval.ms = 500 + metadata.max.retention.bytes = 104857600 + metadata.max.retention.ms = 604800000 + metric.reporters = [] + metrics.num.samples = 2 + metrics.recording.level = INFO + metrics.sample.window.ms = 30000 + min.insync.replicas = 1 + node.id = 0 + num.io.threads = 8 + num.network.threads = 3 + num.partitions = 1 + num.recovery.threads.per.data.dir = 1 + num.replica.alter.log.dirs.threads = null + num.replica.fetchers = 1 + offset.metadata.max.bytes = 4096 + offsets.commit.required.acks = -1 + offsets.commit.timeout.ms = 5000 + offsets.load.buffer.size = 5242880 + offsets.retention.check.interval.ms = 600000 + offsets.retention.minutes = 10080 + offsets.topic.compression.codec = 0 + offsets.topic.num.partitions = 50 + offsets.topic.replication.factor = 1 + offsets.topic.segment.bytes = 104857600 + 
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding + password.encoder.iterations = 4096 + password.encoder.key.length = 128 + password.encoder.keyfactory.algorithm = null + password.encoder.old.secret = null + password.encoder.secret = null + principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder + process.roles = [] + producer.id.expiration.check.interval.ms = 600000 + producer.id.expiration.ms = 86400000 + producer.purgatory.purge.interval.requests = 1000 + queued.max.request.bytes = -1 + queued.max.requests = 500 + quota.window.num = 11 + quota.window.size.seconds = 1 + remote.log.index.file.cache.total.size.bytes = 1073741824 + remote.log.manager.task.interval.ms = 30000 + remote.log.manager.task.retry.backoff.max.ms = 30000 + remote.log.manager.task.retry.backoff.ms = 500 + remote.log.manager.task.retry.jitter = 0.2 + remote.log.manager.thread.pool.size = 10 + remote.log.metadata.custom.metadata.max.bytes = 128 + remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager + remote.log.metadata.manager.class.path = null + remote.log.metadata.manager.impl.prefix = rlmm.config. + remote.log.metadata.manager.listener.name = null + remote.log.reader.max.pending.tasks = 100 + remote.log.reader.threads = 10 + remote.log.storage.manager.class.name = null + remote.log.storage.manager.class.path = null + remote.log.storage.manager.impl.prefix = rsm.config. 
+ remote.log.storage.system.enable = false + replica.fetch.backoff.ms = 1000 + replica.fetch.max.bytes = 1048576 + replica.fetch.min.bytes = 1 + replica.fetch.response.max.bytes = 10485760 + replica.fetch.wait.max.ms = 500 + replica.high.watermark.checkpoint.interval.ms = 5000 + replica.lag.time.max.ms = 30000 + replica.selector.class = null + replica.socket.receive.buffer.bytes = 65536 + replica.socket.timeout.ms = 30000 + replication.quota.window.num = 11 + replication.quota.window.size.seconds = 1 + request.timeout.ms = 30000 + reserved.broker.max.id = 1000 + sasl.client.callback.handler.class = null + sasl.enabled.mechanisms = [GSSAPI] + sasl.jaas.config = null + sasl.kerberos.kinit.cmd = /usr/bin/kinit + sasl.kerberos.min.time.before.relogin = 60000 + sasl.kerberos.principal.to.local.rules = [DEFAULT] + sasl.kerberos.service.name = null + sasl.kerberos.ticket.renew.jitter = 0.05 + sasl.kerberos.ticket.renew.window.factor = 0.8 + sasl.login.callback.handler.class = null + sasl.login.class = null + sasl.login.connect.timeout.ms = null + sasl.login.read.timeout.ms = null + sasl.login.refresh.buffer.seconds = 300 + sasl.login.refresh.min.period.seconds = 60 + sasl.login.refresh.window.factor = 0.8 + sasl.login.refresh.window.jitter = 0.05 + sasl.login.retry.backoff.max.ms = 10000 + sasl.login.retry.backoff.ms = 100 + sasl.mechanism.controller.protocol = GSSAPI + sasl.mechanism.inter.broker.protocol = GSSAPI + sasl.oauthbearer.clock.skew.seconds = 30 + sasl.oauthbearer.expected.audience = null + sasl.oauthbearer.expected.issuer = null + sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 + sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 + sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 + sasl.oauthbearer.jwks.endpoint.url = null + sasl.oauthbearer.scope.claim.name = scope + sasl.oauthbearer.sub.claim.name = sub + sasl.oauthbearer.token.endpoint.url = null + sasl.server.callback.handler.class = null + sasl.server.max.receive.size = 524288 + 
security.inter.broker.protocol = PLAINTEXT + security.providers = null + server.max.startup.time.ms = 9223372036854775807 + socket.connection.setup.timeout.max.ms = 30000 + socket.connection.setup.timeout.ms = 10000 + socket.listen.backlog.size = 50 + socket.receive.buffer.bytes = 102400 + socket.request.max.bytes = 104857600 + socket.send.buffer.bytes = 102400 + ssl.cipher.suites = [] + ssl.client.auth = none + ssl.enabled.protocols = [TLSv1.2, TLSv1.3] + ssl.endpoint.identification.algorithm = https + ssl.engine.factory.class = null + ssl.key.password = null + ssl.keymanager.algorithm = SunX509 + ssl.keystore.certificate.chain = null + ssl.keystore.key = null + ssl.keystore.location = null + ssl.keystore.password = null + ssl.keystore.type = JKS + ssl.principal.mapping.rules = DEFAULT + ssl.protocol = TLSv1.3 + ssl.provider = null + ssl.secure.random.implementation = null + ssl.trustmanager.algorithm = PKIX + ssl.truststore.certificates = null + ssl.truststore.location = null + ssl.truststore.password = null + ssl.truststore.type = JKS + transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 + transaction.max.timeout.ms = 900000 + transaction.partition.verification.enable = true + transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 + transaction.state.log.load.buffer.size = 5242880 + transaction.state.log.min.isr = 1 + transaction.state.log.num.partitions = 50 + transaction.state.log.replication.factor = 1 + transaction.state.log.segment.bytes = 104857600 + transactional.id.expiration.ms = 604800000 + unclean.leader.election.enable = false + unstable.api.versions.enable = false + zookeeper.clientCnxnSocket = null + zookeeper.connect = localhost:2181 + zookeeper.connection.timeout.ms = 18000 + zookeeper.max.in.flight.requests = 10 + zookeeper.metadata.migration.enable = false + zookeeper.session.timeout.ms = 18000 + zookeeper.set.acl = false + zookeeper.ssl.cipher.suites = null + zookeeper.ssl.client.enable = false + 
zookeeper.ssl.crl.enable = false + zookeeper.ssl.enabled.protocols = null + zookeeper.ssl.endpoint.identification.algorithm = HTTPS + zookeeper.ssl.keystore.location = null + zookeeper.ssl.keystore.password = null + zookeeper.ssl.keystore.type = null + zookeeper.ssl.ocsp.enable = false + zookeeper.ssl.protocol = TLSv1.2 + zookeeper.ssl.truststore.location = null + zookeeper.ssl.truststore.password = null + zookeeper.ssl.truststore.type = null + (kafka.server.KafkaConfig) +[2023-11-03 15:24:53,900] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:53,900] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:53,900] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:53,902] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 15:24:53,923] INFO Loading logs from log dirs ArraySeq(/tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:53,931] INFO Skipping recovery of 51 logs from /tmp/kafka-logs since clean shutdown file was found (kafka.log.LogManager) +[2023-11-03 15:24:53,964] INFO [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-logs] Loading producer state till offset 11 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:53,965] INFO [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-logs] Reloading from producer snapshot and rebuilding producer state from offset 11 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:53,965] INFO [ProducerStateManager partition=__consumer_offsets-28]Loading producer state from snapshot file 'SnapshotFile(offset=11, file=/tmp/kafka-logs/__consumer_offsets-28/00000000000000000011.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager) +[2023-11-03 15:24:53,977] INFO [LogLoader 
partition=__consumer_offsets-28, dir=/tmp/kafka-logs] Producer state recovery took 12ms for snapshot load and 0ms for segment recovery from offset 11 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:53,990] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-28, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=28, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=11) with 1 segments, local-log-start-offset 0 and log-end-offset 11 in 54ms (1/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:53,991] INFO [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:53,993] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-13, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=13, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 4ms (2/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:53,995] INFO [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:53,996] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-43, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=43, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (3/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:53,998] INFO [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:53,999] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-6, topicId=PIKFaFrMTbm2cK6klZ1I7A, 
topic=__consumer_offsets, partition=6, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (4/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:54,001] INFO [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:54,002] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-36, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=36, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (5/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:54,003] INFO [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:54,005] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-21, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=21, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (6/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:54,006] INFO [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:54,007] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-12, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=12, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (7/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:54,009] INFO [LogLoader partition=__consumer_offsets-42, 
dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:54,010] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-42, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=42, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (8/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:54,011] INFO [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:54,012] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-27, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=27, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (9/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:54,013] INFO [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:54,014] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-20, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=20, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (10/51 completed in /tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 15:24:54,015] INFO [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:24:54,016] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-5, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=5, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (11/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,017] INFO [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,018] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-35, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=35, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (12/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,019] INFO [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,020] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-0, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (13/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,021] INFO [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,022] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-30, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=30, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (14/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,023] INFO [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,024] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-15, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=15, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (15/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,025] INFO [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,026] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-45, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=45, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (16/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,028] INFO [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,029] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-8, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=8, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (17/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,030] INFO [LogLoader partition=__consumer_offsets-38, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,030] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-38, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=38, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (18/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,032] INFO [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,033] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-23, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=23, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (19/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,034] INFO [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,034] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-14, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=14, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (20/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,036] INFO [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,036] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-44, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=44, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (21/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,038] INFO [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,039] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-29, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=29, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (22/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,040] INFO [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,041] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-22, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=22, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (23/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,042] INFO [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,043] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-7, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=7, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (24/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,044] INFO [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,045] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-37, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=37, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (25/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,046] INFO [LogLoader partition=__consumer_offsets-32, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,047] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-32, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=32, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (26/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,048] INFO [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,048] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-17, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=17, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (27/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,050] INFO [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,050] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-47, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=47, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (28/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,051] INFO [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,052] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-40, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=40, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (29/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,053] INFO [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,054] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-25, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=25, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (30/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,055] INFO [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,056] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-2, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=2, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (31/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,057] INFO [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,058] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-16, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=16, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (32/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,059] INFO [LogLoader partition=__consumer_offsets-1, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,059] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-1, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=1, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (33/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,060] INFO [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,061] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-46, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=46, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (34/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,062] INFO [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,063] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-31, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=31, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (35/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,064] INFO [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,064] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-24, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=24, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (36/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,066] INFO [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,066] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-9, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=9, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (37/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,067] INFO [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,068] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-39, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=39, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (38/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,069] INFO [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,070] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-49, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=49, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (39/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,071] INFO [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,072] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-26, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=26, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (40/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,073] INFO [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,074] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-11, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=11, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (41/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,075] INFO [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,076] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-4, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=4, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (42/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,077] INFO [LogLoader partition=__consumer_offsets-34, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,078] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-34, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=34, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (43/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,079] INFO [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,080] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-19, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=19, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (44/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,081] INFO [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,081] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-48, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=48, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (45/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,082] INFO [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,083] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-33, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=33, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 1ms (46/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,084] INFO [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,085] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-10, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=10, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (47/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,086] INFO [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,087] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-41, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=41, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (48/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,088] INFO [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,089] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-18, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=18, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 2ms (49/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,091] INFO [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,092] INFO Completed load of Log(dir=/tmp/kafka-logs/__consumer_offsets-3, topicId=PIKFaFrMTbm2cK6klZ1I7A, topic=__consumer_offsets, partition=3, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=0) with 1 segments, local-log-start-offset 0 and log-end-offset 0 in 3ms (50/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,094] INFO [LogLoader partition=OrderEventQA2-0, dir=/tmp/kafka-logs] Loading producer state till offset 10 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,094] INFO [LogLoader partition=OrderEventQA2-0, dir=/tmp/kafka-logs] Reloading from producer snapshot and rebuilding producer state from offset 10 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,094] INFO [ProducerStateManager partition=OrderEventQA2-0]Loading producer state from snapshot file 'SnapshotFile(offset=10, file=/tmp/kafka-logs/OrderEventQA2-0/00000000000000000010.snapshot)' (org.apache.kafka.storage.internals.log.ProducerStateManager)
+[2023-11-03 15:24:54,094] INFO [LogLoader partition=OrderEventQA2-0, dir=/tmp/kafka-logs] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 10 (kafka.log.UnifiedLog$)
+[2023-11-03 15:24:54,094] INFO Completed load of Log(dir=/tmp/kafka-logs/OrderEventQA2-0, topicId=78sflbJnR2GXTOOat5Yo7Q, topic=OrderEventQA2, partition=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, logEndOffset=10) with 1 segments, local-log-start-offset 0 and log-end-offset 10 in 3ms (51/51 completed in /tmp/kafka-logs) (kafka.log.LogManager)
+[2023-11-03 15:24:54,097] INFO Loaded 51 logs in 173ms (kafka.log.LogManager)
+[2023-11-03 15:24:54,099] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
+[2023-11-03 15:24:54,099] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
+[2023-11-03 15:24:54,125] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
+[2023-11-03 15:24:54,134] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
+[2023-11-03 15:24:54,145] INFO [MetadataCache brokerId=0] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
+[2023-11-03 15:24:54,159] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
+[2023-11-03 15:24:54,320] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
+[2023-11-03 15:24:54,330] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
+[2023-11-03 15:24:54,333] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
+[2023-11-03 15:24:54,347] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
+[2023-11-03 15:24:54,348] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
+[2023-11-03 15:24:54,348] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
+[2023-11-03 15:24:54,349] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
+[2023-11-03 15:24:54,349] INFO [ExpirationReaper-0-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
+[2023-11-03 15:24:54,357] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
+[2023-11-03 15:24:54,357] INFO [AddPartitionsToTxnSenderThread-0]: Starting (kafka.server.AddPartitionsToTxnManager)
+[2023-11-03 15:24:54,383] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
+[2023-11-03 15:24:54,393] INFO Stat of the created znode at /brokers/ids/0 is: 173,173,1699039494389,1699039494389,1,0,0,72057944010194945,202,0,173
+ (kafka.zk.KafkaZkClient)
+[2023-11-03 15:24:54,394] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:9092, czxid (broker epoch): 173 (kafka.zk.KafkaZkClient)
+[2023-11-03 15:24:54,427] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
+[2023-11-03 15:24:54,433] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
+[2023-11-03 15:24:54,433] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
+[2023-11-03 15:24:54,444] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 15:24:54,453] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 15:24:54,464] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
+[2023-11-03 15:24:54,466] INFO [TxnMarkerSenderThread-0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
+[2023-11-03 15:24:54,466] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
+[2023-11-03 15:24:54,494] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
+[2023-11-03 15:24:54,510] INFO [Controller id=0, targetBrokerId=0] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)
+[2023-11-03 15:24:54,511] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
+[2023-11-03 15:24:54,511] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
+[2023-11-03 15:24:54,513] INFO [Controller id=0, targetBrokerId=0] Client requested connection close from node 0 (org.apache.kafka.clients.NetworkClient)
+[2023-11-03 15:24:54,528] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
+[2023-11-03 15:24:54,531] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
+[2023-11-03 15:24:54,536] INFO Kafka version: 3.6.0 (org.apache.kafka.common.utils.AppInfoParser)
+[2023-11-03 15:24:54,536] INFO Kafka commitId: 60e845626d8a465a (org.apache.kafka.common.utils.AppInfoParser)
+[2023-11-03 15:24:54,536] INFO Kafka startTimeMs: 1699039494533 (org.apache.kafka.common.utils.AppInfoParser)
+[2023-11-03 15:24:54,537] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
+[2023-11-03 15:24:54,663] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
+[2023-11-03 15:24:54,691] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, OrderEventQA2-0, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
+[2023-11-03 15:24:54,698] INFO [Partition __consumer_offsets-3 broker=0] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,702] INFO [Partition __consumer_offsets-18 broker=0] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,702] INFO [Partition __consumer_offsets-41 broker=0] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,703] INFO [Partition __consumer_offsets-10 broker=0] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,703] INFO [Partition __consumer_offsets-33 broker=0] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,703] INFO [Partition __consumer_offsets-48 broker=0] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,704] INFO [Partition __consumer_offsets-19 broker=0] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,704] INFO [Partition __consumer_offsets-34 broker=0] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,705] INFO [Partition __consumer_offsets-4 broker=0] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,706] INFO [Partition __consumer_offsets-11 broker=0] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,706] INFO [Partition __consumer_offsets-26 broker=0] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,707] INFO [Partition __consumer_offsets-49 broker=0] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,707] INFO [Partition __consumer_offsets-39 broker=0] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,707] INFO [Partition __consumer_offsets-9 broker=0] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,708] INFO [Partition __consumer_offsets-24 broker=0] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,708] INFO [Partition __consumer_offsets-31 broker=0] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,709] INFO [Partition __consumer_offsets-46 broker=0] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,709] INFO [Partition __consumer_offsets-1 broker=0] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,709] INFO [Partition __consumer_offsets-16 broker=0] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,710] INFO [Partition __consumer_offsets-2 broker=0] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,710] INFO [Partition __consumer_offsets-25 broker=0] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,711] INFO [Partition __consumer_offsets-40 broker=0] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,711] INFO [Partition __consumer_offsets-47 broker=0] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,712] INFO [Partition __consumer_offsets-17 broker=0] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,712] INFO [Partition __consumer_offsets-32 broker=0] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,713] INFO [Partition __consumer_offsets-37 broker=0] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,713] INFO [Partition __consumer_offsets-7 broker=0] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,714] INFO [Partition __consumer_offsets-22 broker=0] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,714] INFO [Partition __consumer_offsets-29 broker=0] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,714] INFO [Partition __consumer_offsets-44 broker=0] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,715] INFO [Partition __consumer_offsets-14 broker=0] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,715] INFO [Partition __consumer_offsets-23 broker=0] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,716] INFO [Partition __consumer_offsets-38 broker=0] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,716] INFO [Partition __consumer_offsets-8 broker=0] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,717] INFO [Partition __consumer_offsets-45 broker=0] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,717] INFO [Partition __consumer_offsets-15 broker=0] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,718] INFO [Partition __consumer_offsets-30 broker=0] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,718] INFO [Partition __consumer_offsets-0 broker=0] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,719] INFO [Partition __consumer_offsets-35 broker=0] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,719] INFO [Partition __consumer_offsets-5 broker=0] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,719] INFO [Partition __consumer_offsets-20 broker=0] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,720] INFO [Partition __consumer_offsets-27 broker=0] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,720] INFO [Partition __consumer_offsets-42 broker=0] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,721] INFO [Partition __consumer_offsets-12 broker=0] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,722] INFO [Partition __consumer_offsets-21 broker=0] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,722] INFO [Partition __consumer_offsets-36 broker=0] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,722] INFO [Partition OrderEventQA2-0 broker=0] Log loaded for partition OrderEventQA2-0 with initial high watermark 10 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,723] INFO [Partition __consumer_offsets-6 broker=0] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,723] INFO [Partition __consumer_offsets-43 broker=0] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,724] INFO [Partition __consumer_offsets-13 broker=0] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,724] INFO [Partition __consumer_offsets-28 broker=0] Log loaded for partition __consumer_offsets-28 with initial high watermark 11 (kafka.cluster.Partition)
+[2023-11-03 15:24:54,729] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 15:24:54,730] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 15:24:54,731]
INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,731] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,731] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,731] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,731] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,731] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,731] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,731] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,731] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,731] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,731] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,731] INFO [GroupMetadataManager brokerId=0] 
Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,731] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the 
group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata 
from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,732] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,732] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 17 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,733] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,733] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 
+[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 5 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,734] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,734] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,735] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,735] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,735] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,735] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,735] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,735] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,735] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,735] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,736] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds for 
epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,736] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-18 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,736] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-41 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,736] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-10 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,737] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-33 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,737] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-48 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,737] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-19 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,737] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-34 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,737] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-4 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,738] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-11 in 6 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,738] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-26 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,738] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-49 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,738] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-39 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,739] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-9 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,739] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-24 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,739] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-31 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,739] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-46 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,739] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-1 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,739] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-16 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,740] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-2 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,740] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-25 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,740] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-40 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,740] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-47 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,740] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-17 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,740] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-32 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,740] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,741] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-7 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,741] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-22 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,741] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-29 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,741] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-44 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,741] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-14 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,741] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-23 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,742] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-38 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,742] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-8 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,742] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-45 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,742] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-15 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,742] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-30 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,742] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-0 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-35 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-5 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-20 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-27 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-42 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-12 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-21 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,744] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-36 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,744] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-6 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,744] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-43 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,744] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-13 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:24:54,760] INFO Loaded member MemberMetadata(memberId=rdkafka-647c1d45-7ee9-44f3-bb43-da525071d1e7, groupInstanceId=None, clientId=rdkafka, clientHost=/0:0:0:0:0:0:0:1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) in group ConsumerGroup01 with generation 1. 
(kafka.coordinator.group.GroupMetadata$) +[2023-11-03 15:24:54,765] INFO Loaded member MemberMetadata(memberId=rdkafka-1a91f36d-de43-46ba-a289-f67a8fda8359, groupInstanceId=None, clientId=rdkafka, clientHost=/0:0:0:0:0:0:0:1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) in group ConsumerGroup01 with generation 2. (kafka.coordinator.group.GroupMetadata$) +[2023-11-03 15:24:54,765] INFO Loaded member MemberMetadata(memberId=rdkafka-776e9df4-00c7-4e7d-a711-16ab5fe56213, groupInstanceId=None, clientId=rdkafka, clientHost=/127.0.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) in group ConsumerGroup01 with generation 2. (kafka.coordinator.group.GroupMetadata$) +[2023-11-03 15:24:54,765] INFO Loaded member MemberMetadata(memberId=rdkafka-1a91f36d-de43-46ba-a289-f67a8fda8359, groupInstanceId=None, clientId=rdkafka, clientHost=/0:0:0:0:0:0:0:1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) in group ConsumerGroup01 with generation 3. (kafka.coordinator.group.GroupMetadata$) +[2023-11-03 15:24:54,766] INFO Loaded member MemberMetadata(memberId=rdkafka-1a91f36d-de43-46ba-a289-f67a8fda8359, groupInstanceId=None, clientId=rdkafka, clientHost=/0:0:0:0:0:0:0:1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) in group ConsumerGroup01 with generation 4. (kafka.coordinator.group.GroupMetadata$) +[2023-11-03 15:24:54,766] INFO Loaded member MemberMetadata(memberId=rdkafka-1f0745b4-626b-4168-83dc-7ef55344a53e, groupInstanceId=None, clientId=rdkafka, clientHost=/127.0.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) in group ConsumerGroup01 with generation 4. 
(kafka.coordinator.group.GroupMetadata$) +[2023-11-03 15:24:54,766] INFO Loaded member MemberMetadata(memberId=rdkafka-34655fad-c217-4fa4-9727-c9c5d201e5b3, groupInstanceId=None, clientId=rdkafka, clientHost=/127.0.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) in group ConsumerGroup01 with generation 7. (kafka.coordinator.group.GroupMetadata$) +[2023-11-03 15:24:54,766] INFO Loaded member MemberMetadata(memberId=rdkafka-3a637c2c-a2de-44db-acf1-39f46b019eae, groupInstanceId=None, clientId=rdkafka, clientHost=/127.0.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range)) in group ConsumerGroup01 with generation 7. (kafka.coordinator.group.GroupMetadata$) +[2023-11-03 15:24:54,768] INFO [GroupCoordinator 0]: Loading group metadata for ConsumerGroup01 with generation 7 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:24:54,770] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-28 in 35 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 15:25:39,776] INFO [GroupCoordinator 0]: Member rdkafka-34655fad-c217-4fa4-9727-c9c5d201e5b3 in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:25:39,783] INFO [GroupCoordinator 0]: Preparing to rebalance group ConsumerGroup01 in state PreparingRebalance with old generation 7 (__consumer_offsets-28) (reason: removing member rdkafka-34655fad-c217-4fa4-9727-c9c5d201e5b3 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:25:39,786] INFO [GroupCoordinator 0]: Member rdkafka-3a637c2c-a2de-44db-acf1-39f46b019eae in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:25:39,790] INFO [GroupCoordinator 0]: Group ConsumerGroup01 with generation 8 is now empty (__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:36:27,509] INFO Creating topic test-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient) +[2023-11-03 15:36:27,577] INFO [Controller id=0, targetBrokerId=0] Node 0 disconnected. 
(org.apache.kafka.clients.NetworkClient) +[2023-11-03 15:36:27,583] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(test-topic-0) (kafka.server.ReplicaFetcherManager) +[2023-11-03 15:36:27,594] INFO [LogLoader partition=test-topic-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 15:36:27,598] INFO Created log for partition test-topic-0 in /tmp/kafka-logs/test-topic-0 with properties {} (kafka.log.LogManager) +[2023-11-03 15:36:27,601] INFO [Partition test-topic-0 broker=0] No checkpointed highwatermark is found for partition test-topic-0 (kafka.cluster.Partition) +[2023-11-03 15:36:27,601] INFO [Partition test-topic-0 broker=0] Log loaded for partition test-topic-0 with initial high watermark 0 (kafka.cluster.Partition) +[2023-11-03 15:36:53,741] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group test-group in Empty state. Created a new member id rdkafka-b7353e68-ba09-453d-99b4-345686fd85ef and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:36:53,744] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 0 (__consumer_offsets-12) (reason: Adding new member rdkafka-b7353e68-ba09-453d-99b4-345686fd85ef with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:36:53,747] INFO [GroupCoordinator 0]: Stabilized group test-group generation 1 (__consumer_offsets-12) with 1 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:36:53,752] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-b7353e68-ba09-453d-99b4-345686fd85ef for group test-group for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:37:38,759] INFO [GroupCoordinator 0]: Member rdkafka-b7353e68-ba09-453d-99b4-345686fd85ef in group test-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:37:38,760] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 1 (__consumer_offsets-12) (reason: removing member rdkafka-b7353e68-ba09-453d-99b4-345686fd85ef on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:37:38,760] INFO [GroupCoordinator 0]: Group test-group with generation 2 is now empty (__consumer_offsets-12) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:39:20,388] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group test-group in Empty state. Created a new member id rdkafka-d9567fa0-b38b-40e6-abcd-8b6f7889ebee and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:39:20,389] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 2 (__consumer_offsets-12) (reason: Adding new member rdkafka-d9567fa0-b38b-40e6-abcd-8b6f7889ebee with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:39:20,389] INFO [GroupCoordinator 0]: Stabilized group test-group generation 3 (__consumer_offsets-12) with 1 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:39:20,391] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-d9567fa0-b38b-40e6-abcd-8b6f7889ebee for group test-group for generation 3. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:40:05,392] INFO [GroupCoordinator 0]: Member rdkafka-d9567fa0-b38b-40e6-abcd-8b6f7889ebee in group test-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:40:05,393] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 3 (__consumer_offsets-12) (reason: removing member rdkafka-d9567fa0-b38b-40e6-abcd-8b6f7889ebee on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:40:05,393] INFO [GroupCoordinator 0]: Group test-group with generation 4 is now empty (__consumer_offsets-12) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:41:52,430] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group test-group in Empty state. Created a new member id rdkafka-210b8db0-bf36-41b0-affe-10930b4c650c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:41:52,431] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 4 (__consumer_offsets-12) (reason: Adding new member rdkafka-210b8db0-bf36-41b0-affe-10930b4c650c with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:41:52,431] INFO [GroupCoordinator 0]: Stabilized group test-group generation 5 (__consumer_offsets-12) with 1 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:41:52,432] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-210b8db0-bf36-41b0-affe-10930b4c650c for group test-group for generation 5. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:13,452] INFO [GroupCoordinator 0]: Member rdkafka-210b8db0-bf36-41b0-affe-10930b4c650c in group test-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:13,453] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 5 (__consumer_offsets-12) (reason: removing member rdkafka-210b8db0-bf36-41b0-affe-10930b4c650c on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:13,453] INFO [GroupCoordinator 0]: Group test-group with generation 6 is now empty (__consumer_offsets-12) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:48,776] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group test-group in Empty state. Created a new member id rdkafka-5e504db3-70cf-4a03-9cab-d2ce5f173b38 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:48,777] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 6 (__consumer_offsets-12) (reason: Adding new member rdkafka-5e504db3-70cf-4a03-9cab-d2ce5f173b38 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:48,777] INFO [GroupCoordinator 0]: Stabilized group test-group generation 7 (__consumer_offsets-12) with 1 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:48,778] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-5e504db3-70cf-4a03-9cab-d2ce5f173b38 for group test-group for generation 7. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:58,679] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 7 (__consumer_offsets-12) (reason: Removing member rdkafka-5e504db3-70cf-4a03-9cab-d2ce5f173b38 on LeaveGroup; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:58,679] INFO [GroupCoordinator 0]: Group test-group with generation 8 is now empty (__consumer_offsets-12) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:43:58,682] INFO [GroupCoordinator 0]: Member MemberMetadata(memberId=rdkafka-5e504db3-70cf-4a03-9cab-d2ce5f173b38, groupInstanceId=None, clientId=rdkafka, clientHost=/127.0.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, roundrobin)) has left group test-group through explicit `LeaveGroup`; client reason: not provided (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:44:21,236] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group test-group in Empty state. Created a new member id rdkafka-3940170e-44b4-4bcd-8914-b09343b6fc63 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:44:21,237] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 8 (__consumer_offsets-12) (reason: Adding new member rdkafka-3940170e-44b4-4bcd-8914-b09343b6fc63 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:44:21,237] INFO [GroupCoordinator 0]: Stabilized group test-group generation 9 (__consumer_offsets-12) with 1 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:44:21,238] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-3940170e-44b4-4bcd-8914-b09343b6fc63 for group test-group for generation 9. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:45:48,259] INFO [GroupCoordinator 0]: Member rdkafka-3940170e-44b4-4bcd-8914-b09343b6fc63 in group test-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:45:48,259] INFO [GroupCoordinator 0]: Preparing to rebalance group test-group in state PreparingRebalance with old generation 9 (__consumer_offsets-12) (reason: removing member rdkafka-3940170e-44b4-4bcd-8914-b09343b6fc63 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 15:45:48,259] INFO [GroupCoordinator 0]: Group test-group with generation 10 is now empty (__consumer_offsets-12) (kafka.coordinator.group.GroupCoordinator) diff --git a/logs/server.log.2023-11-03-10 b/logs/server.log.2023-11-03-10 new file mode 100644 index 0000000..b3e8b0a --- /dev/null +++ b/logs/server.log.2023-11-03-10 @@ -0,0 +1,88 @@ +[2023-11-03 10:39:05,838] INFO Reading configuration from: ./config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,841] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,841] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,841] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,841] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,843] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) +[2023-11-03 10:39:05,843] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) +[2023-11-03 10:39:05,843] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) +[2023-11-03 10:39:05,843] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) +[2023-11-03 10:39:05,844] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) +[2023-11-03 10:39:05,845] INFO Reading configuration from: ./config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,845] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,845] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,845] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,845] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 10:39:05,845] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) +[2023-11-03 10:39:05,854] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@4034c28c (org.apache.zookeeper.server.ServerMetrics) +[2023-11-03 10:39:05,856] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) +[2023-11-03 10:39:05,856] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) +[2023-11-03 10:39:05,858] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) +[2023-11-03 10:39:05,864] INFO (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,865] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,865] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,865] INFO / / ___ ___ | | __ ___ ___ _ __ 
___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,865] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,865] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,865] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,865] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,865] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,865] INFO (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,866] INFO Server environment:zookeeper.version=3.8.2-139d619b58292d7734b4fc83a0f44be4e7b0c986, built on 2023-07-05 19:24 UTC (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,866] INFO Server environment:host.name=localhost (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,866] INFO Server environment:java.version=17.0.6 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,867] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,867] INFO Server environment:java.home=/opt/openjdk-bin-17.0.6_p10 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,867] INFO Server 
environment:java.class.path=/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/activation-1.1.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/argparse4j-0.7.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/audience-annotations-0.12.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/caffeine-2.9.3.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/checker-qual-3.19.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/commons-beanutils-1.9.4.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/commons-cli-1.4.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/commons-collections-3.2.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/commons-digester-2.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/commons-io-2.11.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/commons-lang3-3.8.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/commons-logging-1.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/commons-validator-1.7.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/connect-api-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/connect-basic-auth-extension-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/connect-json-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/connect-mirror-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/connect-mirror-client-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/connect-runtime-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/connect-transforms-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/error_prone_annotations-2.10.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/hk2-api-2.6.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/hk2-locator-2.6.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/hk
2-utils-2.6.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jackson-annotations-2.13.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jackson-core-2.13.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jackson-databind-2.13.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jackson-dataformat-csv-2.13.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jackson-datatype-jdk8-2.13.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-base-2.13.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-json-provider-2.13.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jackson-module-jaxb-annotations-2.13.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jackson-module-scala_2.13-2.13.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jakarta.activation-api-1.2.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jakarta.inject-2.6.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jakarta.xml.bind-api-2.3.3.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/javassist-3.29.2-GA.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/javax.activation-api-1.2.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/javax.annotation-api-1.3.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/javax.servlet-api-3.1.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jaxb-api-2.3.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jersey-client-2.39.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jersey-common-2.39.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../
libs/jersey-container-servlet-2.39.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-core-2.39.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jersey-hk2-2.39.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jersey-server-2.39.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-client-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-continuation-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-http-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-io-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-security-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-server-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-servlet-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-servlets-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-util-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jetty-util-ajax-9.4.52.v20230823.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jline-3.22.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jopt-simple-5.0.4.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jose4j-0.9.3.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/jsr305-3.0.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-clients-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-group-coordinator-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-log4j-appender-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-metadata-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-raft-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-server-common-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin
/../libs/kafka-shell-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-storage-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-storage-api-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-streams-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-streams-examples-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-streams-scala_2.13-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-streams-test-utils-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-tools-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka-tools-api-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/kafka_2.13-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/lz4-java-1.8.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/maven-artifact-3.8.8.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/metrics-core-2.2.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/metrics-core-4.1.12.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/netty-buffer-4.1.94.Final.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/netty-codec-4.1.94.Final.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/netty-common-4.1.94.Final.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/netty-handler-4.1.94.Final.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/netty-resolver-4.1.94.Final.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/netty-transport-4.1.94.Final.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/netty-transport-classes-epoll-4.1.94.Final.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-epoll-4.1.94.Final.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-unix-common-4.1.94.Final.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/home/memartel/Downloads
/kafka_2.13-3.6.0/bin/../libs/paranamer-2.8.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/pcollections-4.0.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/plexus-utils-3.3.1.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/reflections-0.10.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/reload4j-1.2.25.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/rocksdbjni-7.9.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/scala-collection-compat_2.13-2.10.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/scala-java8-compat_2.13-1.0.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/scala-library-2.13.11.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/scala-logging_2.13-3.9.4.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/scala-reflect-2.13.11.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/slf4j-api-1.7.36.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/slf4j-reload4j-1.7.36.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/snappy-java-1.1.10.4.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/swagger-annotations-2.2.8.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/trogdor-3.6.0.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/zookeeper-3.8.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/zookeeper-jute-3.8.2.jar:/home/memartel/Downloads/kafka_2.13-3.6.0/bin/../libs/zstd-jni-1.5.5-1.jar (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:os.name=Linux 
(org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:os.version=6.4.3-cachyosGentooThinkPadP53 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:user.name=memartel (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:user.home=/home/memartel (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:user.dir=/home/memartel/Downloads/kafka_2.13-3.6.0 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,868] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,869] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,869] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,869] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,869] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,869] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,869] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,869] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,870] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) +[2023-11-03 10:39:05,871] INFO minSessionTimeout 
set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,871] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,872] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) +[2023-11-03 10:39:05,872] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) +[2023-11-03 10:39:05,873] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 10:39:05,873] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 10:39:05,873] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 10:39:05,873] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 10:39:05,873] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 10:39:05,873] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 10:39:05,875] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,875] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,876] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) +[2023-11-03 10:39:05,876] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) +[2023-11-03 10:39:05,876] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms clientPortListenBacklog -1 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 
10:39:05,880] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) +[2023-11-03 10:39:05,881] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) +[2023-11-03 10:39:05,882] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 24 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) +[2023-11-03 10:39:05,886] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) +[2023-11-03 10:39:05,895] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) +[2023-11-03 10:39:05,896] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) +[2023-11-03 10:39:05,896] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) +[2023-11-03 10:39:05,896] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) +[2023-11-03 10:39:05,900] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) +[2023-11-03 10:39:05,900] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) +[2023-11-03 10:39:05,902] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) +[2023-11-03 10:39:05,903] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) +[2023-11-03 10:39:05,903] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 10:39:05,910] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) +[2023-11-03 10:39:05,910] INFO 
zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) +[2023-11-03 10:39:05,921] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) +[2023-11-03 10:39:05,922] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) diff --git a/logs/server.log.2023-11-03-11 b/logs/server.log.2023-11-03-11 new file mode 100644 index 0000000..c046444 --- /dev/null +++ b/logs/server.log.2023-11-03-11 @@ -0,0 +1,89 @@ +[2023-11-03 11:51:41,982] INFO Reading configuration from: ./config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,984] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,984] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,984] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,984] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,986] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) +[2023-11-03 11:51:41,986] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) +[2023-11-03 11:51:41,986] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) +[2023-11-03 11:51:41,986] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) +[2023-11-03 11:51:41,987] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) +[2023-11-03 11:51:41,987] INFO Reading configuration from: ./config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,987] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,987] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,987] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,987] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) +[2023-11-03 11:51:41,987] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) +[2023-11-03 11:51:41,996] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@e50a6f6 (org.apache.zookeeper.server.ServerMetrics) +[2023-11-03 11:51:41,998] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) +[2023-11-03 11:51:41,998] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) +[2023-11-03 11:51:42,000] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) +[2023-11-03 11:51:42,006] INFO (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,006] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,006] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,006] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,006] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,006] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | 
(org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,006] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,006] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,006] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,006] INFO (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,008] INFO Server environment:zookeeper.version=3.8.2-139d619b58292d7734b4fc83a0f44be4e7b0c986, built on 2023-07-05 19:24 UTC (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,008] INFO Server environment:host.name=localhost (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,008] INFO Server environment:java.version=17.0.6 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,008] INFO Server environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,008] INFO Server environment:java.home=/opt/openjdk-bin-17.0.6_p10 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,008] INFO Server 
environment:java.class.path=/scratch/kafka_2.13-3.6.0/bin/../libs/activation-1.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/argparse4j-0.7.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/audience-annotations-0.12.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/caffeine-2.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/checker-qual-3.19.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-beanutils-1.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-cli-1.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-collections-3.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-digester-2.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-io-2.11.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-lang3-3.8.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-logging-1.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-validator-1.7.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-basic-auth-extension-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-json-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-client-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-runtime-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-transforms-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/error_prone_annotations-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-api-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-locator-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-utils-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-core-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-databind-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-dataformat-csv-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-datatype-jdk8-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jack
son-jaxrs-base-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-json-provider-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-jaxb-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-scala_2.13-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.activation-api-1.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.inject-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.xml.bind-api-2.3.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javassist-3.29.2-GA.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.activation-api-1.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.annotation-api-1.3.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.servlet-api-3.1.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jaxb-api-2.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-client-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-common-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-core-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-hk2-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-server-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-client-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-continuation-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-http-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-io-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-security-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-server-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-servlet-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../l
ibs/jetty-servlets-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-ajax-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jline-3.22.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jopt-simple-5.0.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jose4j-0.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jsr305-3.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-clients-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-group-coordinator-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-log4j-appender-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-metadata-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-raft-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-server-common-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-shell-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-examples-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-scala_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-test-utils-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/lz4-java-1.8.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/maven-artifact-3.8.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-2.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-4.1.12.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-buffer-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-codec-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-handler-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/
../libs/netty-resolver-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-classes-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-unix-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/paranamer-2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/pcollections-4.0.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/plexus-utils-3.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reflections-0.10.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reload4j-1.2.25.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/rocksdbjni-7.9.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-collection-compat_2.13-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-java8-compat_2.13-1.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-library-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-logging_2.13-3.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-reflect-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-api-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-reload4j-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/snappy-java-1.1.10.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/swagger-annotations-2.2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/trogdor-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-jute-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zstd-jni-1.5.5-1.jar (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,009] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,009] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,009] INFO 
Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO Server environment:os.version=6.4.3-cachyosGentooThinkPadP53 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO Server environment:user.name=memartel (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO Server environment:user.home=/home/memartel (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO Server environment:user.dir=/scratch/kafka_2.13-3.6.0 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO Server environment:os.memory.free=494MB (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,010] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,011] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,011] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,011] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,012] INFO 
Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) +[2023-11-03 11:51:42,013] INFO minSessionTimeout set to 6000 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,013] INFO maxSessionTimeout set to 60000 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,014] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) +[2023-11-03 11:51:42,014] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) +[2023-11-03 11:51:42,015] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 11:51:42,015] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 11:51:42,015] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 11:51:42,015] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 11:51:42,015] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 11:51:42,015] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) +[2023-11-03 11:51:42,018] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,018] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,018] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) +[2023-11-03 11:51:42,018] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) +[2023-11-03 11:51:42,019] INFO Created server with tickTime 3000 ms minSessionTimeout 6000 ms maxSessionTimeout 60000 ms 
clientPortListenBacklog -1 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,023] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) +[2023-11-03 11:51:42,024] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) +[2023-11-03 11:51:42,026] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 24 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) +[2023-11-03 11:51:42,030] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) +[2023-11-03 11:51:42,039] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) +[2023-11-03 11:51:42,040] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) +[2023-11-03 11:51:42,040] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) +[2023-11-03 11:51:42,040] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) +[2023-11-03 11:51:42,042] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) +[2023-11-03 11:51:42,043] INFO Reading snapshot /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileSnap) +[2023-11-03 11:51:42,045] INFO The digest value is empty in snapshot (org.apache.zookeeper.server.DataTree) +[2023-11-03 11:51:42,048] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) +[2023-11-03 11:51:42,049] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) +[2023-11-03 11:51:42,049] 
INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) +[2023-11-03 11:51:42,056] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) +[2023-11-03 11:51:42,056] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) +[2023-11-03 11:51:42,068] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) +[2023-11-03 11:51:42,069] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) diff --git a/logs/server.log.2023-11-03-13 b/logs/server.log.2023-11-03-13 new file mode 100644 index 0000000..af9c356 --- /dev/null +++ b/logs/server.log.2023-11-03-13 @@ -0,0 +1,887 @@ +[2023-11-03 14:01:22,467] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) +[2023-11-03 14:01:22,654] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) +[2023-11-03 14:01:22,710] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) +[2023-11-03 14:01:22,711] INFO starting (kafka.server.KafkaServer) +[2023-11-03 14:01:22,711] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer) +[2023-11-03 14:01:22,720] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. 
(kafka.zookeeper.ZooKeeperClient) +[2023-11-03 14:01:22,722] INFO Client environment:zookeeper.version=3.8.2-139d619b58292d7734b4fc83a0f44be4e7b0c986, built on 2023-07-05 19:24 UTC (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,722] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,722] INFO Client environment:java.version=17.0.6 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,723] INFO Client environment:java.vendor=Eclipse Adoptium (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,723] INFO Client environment:java.home=/opt/openjdk-bin-17.0.6_p10 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,723] INFO Client environment:java.class.path=/scratch/kafka_2.13-3.6.0/bin/../libs/activation-1.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/argparse4j-0.7.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/audience-annotations-0.12.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/caffeine-2.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/checker-qual-3.19.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-beanutils-1.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-cli-1.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-collections-3.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-digester-2.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-io-2.11.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-lang3-3.8.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-logging-1.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/commons-validator-1.7.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-basic-auth-extension-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-json-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-mirror-client-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/connect-runtime-3.6.0.jar:
/scratch/kafka_2.13-3.6.0/bin/../libs/connect-transforms-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/error_prone_annotations-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-api-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-locator-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/hk2-utils-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-core-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-databind-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-dataformat-csv-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-datatype-jdk8-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-base-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-jaxrs-json-provider-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-jaxb-annotations-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jackson-module-scala_2.13-2.13.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.activation-api-1.2.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.inject-2.6.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jakarta.xml.bind-api-2.3.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javassist-3.29.2-GA.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.activation-api-1.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.annotation-api-1.3.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.servlet-api-3.1.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jaxb-api-2.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-client-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-common-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-container-servlet-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-co
ntainer-servlet-core-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-hk2-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jersey-server-2.39.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-client-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-continuation-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-http-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-io-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-security-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-server-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-servlet-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-servlets-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jetty-util-ajax-9.4.52.v20230823.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jline-3.22.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jopt-simple-5.0.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jose4j-0.9.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/jsr305-3.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-clients-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-group-coordinator-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-log4j-appender-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-metadata-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-raft-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-server-common-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-shell-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-storage-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-examples-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-scala_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-streams-test-utils-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../l
ibs/kafka-tools-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka-tools-api-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/kafka_2.13-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/lz4-java-1.8.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/maven-artifact-3.8.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-2.2.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/metrics-core-4.1.12.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-buffer-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-codec-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-handler-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-resolver-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-classes-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-epoll-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/netty-transport-native-unix-common-4.1.94.Final.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/paranamer-2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/pcollections-4.0.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/plexus-utils-3.3.1.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reflections-0.10.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/reload4j-1.2.25.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/rocksdbjni-7.9.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-collection-compat_2.13-2.10.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-java8-compat_2.13-1.0.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-library-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-logging_2.13-3.9.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/scala-reflect-2.13.11.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-api-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/slf4j-reload4j-1.7.36.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/sn
appy-java-1.1.10.4.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/swagger-annotations-2.2.8.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/trogdor-3.6.0.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zookeeper-jute-3.8.2.jar:/scratch/kafka_2.13-3.6.0/bin/../libs/zstd-jni-1.5.5-1.jar (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,723] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,723] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:os.version=6.4.3-cachyosGentooThinkPadP53 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:user.name=memartel (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:user.home=/home/memartel (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:user.dir=/scratch/kafka_2.13-3.6.0 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:os.memory.free=987MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,724] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,725] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@609bcfb6 (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:01:22,728] INFO jute.maxbuffer value is 4194304 Bytes 
(org.apache.zookeeper.ClientCnxnSocket) +[2023-11-03 14:01:22,733] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) +[2023-11-03 14:01:22,734] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) +[2023-11-03 14:01:22,735] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 14:01:22,737] INFO Socket connection established, initiating session, client: /[0:0:0:0:0:0:0:1]:36034, server: localhost/[0:0:0:0:0:0:0:1]:2181 (org.apache.zookeeper.ClientCnxn) +[2023-11-03 14:01:22,748] INFO Session establishment complete on server localhost/[0:0:0:0:0:0:0:1]:2181, session id = 0x100000330eb0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) +[2023-11-03 14:01:22,750] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) +[2023-11-03 14:01:22,914] INFO Cluster ID = YJQPVe6YTAyNHvtG2h9q7w (kafka.server.KafkaServer) +[2023-11-03 14:01:22,924] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint) +[2023-11-03 14:01:22,953] INFO KafkaConfig values: + advertised.listeners = null + alter.config.policy.class.name = null + alter.log.dirs.replication.quota.window.num = 11 + alter.log.dirs.replication.quota.window.size.seconds = 1 + authorizer.class.name = + auto.create.topics.enable = true + auto.include.jmx.reporter = true + auto.leader.rebalance.enable = true + background.threads = 10 + broker.heartbeat.interval.ms = 2000 + broker.id = 0 + broker.id.generation.enable = true + broker.rack = null + broker.session.timeout.ms = 9000 + client.quota.callback.class = null + compression.type = producer + connection.failed.authentication.delay.ms = 100 + connections.max.idle.ms = 600000 + connections.max.reauth.ms = 0 + control.plane.listener.name = null + controlled.shutdown.enable = true + controlled.shutdown.max.retries = 3 + 
controlled.shutdown.retry.backoff.ms = 5000 + controller.listener.names = null + controller.quorum.append.linger.ms = 25 + controller.quorum.election.backoff.max.ms = 1000 + controller.quorum.election.timeout.ms = 1000 + controller.quorum.fetch.timeout.ms = 2000 + controller.quorum.request.timeout.ms = 2000 + controller.quorum.retry.backoff.ms = 20 + controller.quorum.voters = [] + controller.quota.window.num = 11 + controller.quota.window.size.seconds = 1 + controller.socket.timeout.ms = 30000 + create.topic.policy.class.name = null + default.replication.factor = 1 + delegation.token.expiry.check.interval.ms = 3600000 + delegation.token.expiry.time.ms = 86400000 + delegation.token.master.key = null + delegation.token.max.lifetime.ms = 604800000 + delegation.token.secret.key = null + delete.records.purgatory.purge.interval.requests = 1 + delete.topic.enable = true + early.start.listeners = null + fetch.max.bytes = 57671680 + fetch.purgatory.purge.interval.requests = 1000 + group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] + group.consumer.heartbeat.interval.ms = 5000 + group.consumer.max.heartbeat.interval.ms = 15000 + group.consumer.max.session.timeout.ms = 60000 + group.consumer.max.size = 2147483647 + group.consumer.min.heartbeat.interval.ms = 5000 + group.consumer.min.session.timeout.ms = 45000 + group.consumer.session.timeout.ms = 45000 + group.coordinator.new.enable = false + group.coordinator.threads = 1 + group.initial.rebalance.delay.ms = 0 + group.max.session.timeout.ms = 1800000 + group.max.size = 2147483647 + group.min.session.timeout.ms = 6000 + initial.broker.registration.timeout.ms = 60000 + inter.broker.listener.name = null + inter.broker.protocol.version = 3.6-IV2 + kafka.metrics.polling.interval.secs = 10 + kafka.metrics.reporters = [] + leader.imbalance.check.interval.seconds = 300 + leader.imbalance.per.broker.percentage = 10 + listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL + listeners = PLAINTEXT://:9092 + log.cleaner.backoff.ms = 15000 + log.cleaner.dedupe.buffer.size = 134217728 + log.cleaner.delete.retention.ms = 86400000 + log.cleaner.enable = true + log.cleaner.io.buffer.load.factor = 0.9 + log.cleaner.io.buffer.size = 524288 + log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 + log.cleaner.max.compaction.lag.ms = 9223372036854775807 + log.cleaner.min.cleanable.ratio = 0.5 + log.cleaner.min.compaction.lag.ms = 0 + log.cleaner.threads = 1 + log.cleanup.policy = [delete] + log.dir = /tmp/kafka-logs + log.dirs = /tmp/kafka-logs + log.flush.interval.messages = 9223372036854775807 + log.flush.interval.ms = null + log.flush.offset.checkpoint.interval.ms = 60000 + log.flush.scheduler.interval.ms = 9223372036854775807 + log.flush.start.offset.checkpoint.interval.ms = 60000 + log.index.interval.bytes = 4096 + log.index.size.max.bytes = 10485760 + log.local.retention.bytes = -2 + log.local.retention.ms = -2 + log.message.downconversion.enable = true + log.message.format.version = 3.0-IV1 + log.message.timestamp.after.max.ms = 9223372036854775807 + log.message.timestamp.before.max.ms = 9223372036854775807 + log.message.timestamp.difference.max.ms = 9223372036854775807 + log.message.timestamp.type = CreateTime + log.preallocate = false + log.retention.bytes = -1 + log.retention.check.interval.ms = 300000 + log.retention.hours = 168 + log.retention.minutes = null + log.retention.ms = null + log.roll.hours = 168 + log.roll.jitter.hours = 0 + log.roll.jitter.ms = null + log.roll.ms = null + log.segment.bytes = 1073741824 + log.segment.delete.delay.ms = 60000 + max.connection.creation.rate = 2147483647 + max.connections = 2147483647 + max.connections.per.ip = 2147483647 + max.connections.per.ip.overrides = + max.incremental.fetch.session.cache.slots = 1000 + message.max.bytes = 1048588 + metadata.log.dir = null + 
metadata.log.max.record.bytes.between.snapshots = 20971520 + metadata.log.max.snapshot.interval.ms = 3600000 + metadata.log.segment.bytes = 1073741824 + metadata.log.segment.min.bytes = 8388608 + metadata.log.segment.ms = 604800000 + metadata.max.idle.interval.ms = 500 + metadata.max.retention.bytes = 104857600 + metadata.max.retention.ms = 604800000 + metric.reporters = [] + metrics.num.samples = 2 + metrics.recording.level = INFO + metrics.sample.window.ms = 30000 + min.insync.replicas = 1 + node.id = 0 + num.io.threads = 8 + num.network.threads = 3 + num.partitions = 1 + num.recovery.threads.per.data.dir = 1 + num.replica.alter.log.dirs.threads = null + num.replica.fetchers = 1 + offset.metadata.max.bytes = 4096 + offsets.commit.required.acks = -1 + offsets.commit.timeout.ms = 5000 + offsets.load.buffer.size = 5242880 + offsets.retention.check.interval.ms = 600000 + offsets.retention.minutes = 10080 + offsets.topic.compression.codec = 0 + offsets.topic.num.partitions = 50 + offsets.topic.replication.factor = 1 + offsets.topic.segment.bytes = 104857600 + password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding + password.encoder.iterations = 4096 + password.encoder.key.length = 128 + password.encoder.keyfactory.algorithm = null + password.encoder.old.secret = null + password.encoder.secret = null + principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder + process.roles = [] + producer.id.expiration.check.interval.ms = 600000 + producer.id.expiration.ms = 86400000 + producer.purgatory.purge.interval.requests = 1000 + queued.max.request.bytes = -1 + queued.max.requests = 500 + quota.window.num = 11 + quota.window.size.seconds = 1 + remote.log.index.file.cache.total.size.bytes = 1073741824 + remote.log.manager.task.interval.ms = 30000 + remote.log.manager.task.retry.backoff.max.ms = 30000 + remote.log.manager.task.retry.backoff.ms = 500 + remote.log.manager.task.retry.jitter = 0.2 + 
remote.log.manager.thread.pool.size = 10 + remote.log.metadata.custom.metadata.max.bytes = 128 + remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager + remote.log.metadata.manager.class.path = null + remote.log.metadata.manager.impl.prefix = rlmm.config. + remote.log.metadata.manager.listener.name = null + remote.log.reader.max.pending.tasks = 100 + remote.log.reader.threads = 10 + remote.log.storage.manager.class.name = null + remote.log.storage.manager.class.path = null + remote.log.storage.manager.impl.prefix = rsm.config. + remote.log.storage.system.enable = false + replica.fetch.backoff.ms = 1000 + replica.fetch.max.bytes = 1048576 + replica.fetch.min.bytes = 1 + replica.fetch.response.max.bytes = 10485760 + replica.fetch.wait.max.ms = 500 + replica.high.watermark.checkpoint.interval.ms = 5000 + replica.lag.time.max.ms = 30000 + replica.selector.class = null + replica.socket.receive.buffer.bytes = 65536 + replica.socket.timeout.ms = 30000 + replication.quota.window.num = 11 + replication.quota.window.size.seconds = 1 + request.timeout.ms = 30000 + reserved.broker.max.id = 1000 + sasl.client.callback.handler.class = null + sasl.enabled.mechanisms = [GSSAPI] + sasl.jaas.config = null + sasl.kerberos.kinit.cmd = /usr/bin/kinit + sasl.kerberos.min.time.before.relogin = 60000 + sasl.kerberos.principal.to.local.rules = [DEFAULT] + sasl.kerberos.service.name = null + sasl.kerberos.ticket.renew.jitter = 0.05 + sasl.kerberos.ticket.renew.window.factor = 0.8 + sasl.login.callback.handler.class = null + sasl.login.class = null + sasl.login.connect.timeout.ms = null + sasl.login.read.timeout.ms = null + sasl.login.refresh.buffer.seconds = 300 + sasl.login.refresh.min.period.seconds = 60 + sasl.login.refresh.window.factor = 0.8 + sasl.login.refresh.window.jitter = 0.05 + sasl.login.retry.backoff.max.ms = 10000 + sasl.login.retry.backoff.ms = 100 + sasl.mechanism.controller.protocol = GSSAPI + 
sasl.mechanism.inter.broker.protocol = GSSAPI + sasl.oauthbearer.clock.skew.seconds = 30 + sasl.oauthbearer.expected.audience = null + sasl.oauthbearer.expected.issuer = null + sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 + sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 + sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 + sasl.oauthbearer.jwks.endpoint.url = null + sasl.oauthbearer.scope.claim.name = scope + sasl.oauthbearer.sub.claim.name = sub + sasl.oauthbearer.token.endpoint.url = null + sasl.server.callback.handler.class = null + sasl.server.max.receive.size = 524288 + security.inter.broker.protocol = PLAINTEXT + security.providers = null + server.max.startup.time.ms = 9223372036854775807 + socket.connection.setup.timeout.max.ms = 30000 + socket.connection.setup.timeout.ms = 10000 + socket.listen.backlog.size = 50 + socket.receive.buffer.bytes = 102400 + socket.request.max.bytes = 104857600 + socket.send.buffer.bytes = 102400 + ssl.cipher.suites = [] + ssl.client.auth = none + ssl.enabled.protocols = [TLSv1.2, TLSv1.3] + ssl.endpoint.identification.algorithm = https + ssl.engine.factory.class = null + ssl.key.password = null + ssl.keymanager.algorithm = SunX509 + ssl.keystore.certificate.chain = null + ssl.keystore.key = null + ssl.keystore.location = null + ssl.keystore.password = null + ssl.keystore.type = JKS + ssl.principal.mapping.rules = DEFAULT + ssl.protocol = TLSv1.3 + ssl.provider = null + ssl.secure.random.implementation = null + ssl.trustmanager.algorithm = PKIX + ssl.truststore.certificates = null + ssl.truststore.location = null + ssl.truststore.password = null + ssl.truststore.type = JKS + transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 + transaction.max.timeout.ms = 900000 + transaction.partition.verification.enable = true + transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 + transaction.state.log.load.buffer.size = 5242880 + transaction.state.log.min.isr = 1 + 
transaction.state.log.num.partitions = 50 + transaction.state.log.replication.factor = 1 + transaction.state.log.segment.bytes = 104857600 + transactional.id.expiration.ms = 604800000 + unclean.leader.election.enable = false + unstable.api.versions.enable = false + zookeeper.clientCnxnSocket = null + zookeeper.connect = localhost:2181 + zookeeper.connection.timeout.ms = 18000 + zookeeper.max.in.flight.requests = 10 + zookeeper.metadata.migration.enable = false + zookeeper.session.timeout.ms = 18000 + zookeeper.set.acl = false + zookeeper.ssl.cipher.suites = null + zookeeper.ssl.client.enable = false + zookeeper.ssl.crl.enable = false + zookeeper.ssl.enabled.protocols = null + zookeeper.ssl.endpoint.identification.algorithm = HTTPS + zookeeper.ssl.keystore.location = null + zookeeper.ssl.keystore.password = null + zookeeper.ssl.keystore.type = null + zookeeper.ssl.ocsp.enable = false + zookeeper.ssl.protocol = TLSv1.2 + zookeeper.ssl.truststore.location = null + zookeeper.ssl.truststore.password = null + zookeeper.ssl.truststore.type = null + (kafka.server.KafkaConfig) +[2023-11-03 14:01:22,974] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:01:22,974] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:01:22,975] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:01:22,977] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:01:22,985] INFO Log directory /tmp/kafka-logs not found, creating it. 
(kafka.log.LogManager) +[2023-11-03 14:01:22,994] INFO Loading logs from log dirs ArraySeq(/tmp/kafka-logs) (kafka.log.LogManager) +[2023-11-03 14:01:22,997] INFO No logs found to be loaded in /tmp/kafka-logs (kafka.log.LogManager) +[2023-11-03 14:01:23,002] INFO Loaded 0 logs in 8ms (kafka.log.LogManager) +[2023-11-03 14:01:23,003] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) +[2023-11-03 14:01:23,004] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) +[2023-11-03 14:01:23,029] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) +[2023-11-03 14:01:23,037] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) +[2023-11-03 14:01:23,043] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) +[2023-11-03 14:01:23,069] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:01:23,248] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) +[2023-11-03 14:01:23,259] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) +[2023-11-03 14:01:23,262] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:01:23,277] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:01:23,277] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:01:23,278] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:01:23,278] INFO 
[ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:01:23,279] INFO [ExpirationReaper-0-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:01:23,287] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) +[2023-11-03 14:01:23,287] INFO [AddPartitionsToTxnSenderThread-0]: Starting (kafka.server.AddPartitionsToTxnManager) +[2023-11-03 14:01:23,308] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient) +[2023-11-03 14:01:23,321] INFO Stat of the created znode at /brokers/ids/0 is: 25,25,1699034483317,1699034483317,1,0,0,72057607743537152,202,0,25 + (kafka.zk.KafkaZkClient) +[2023-11-03 14:01:23,322] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient) +[2023-11-03 14:01:23,357] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:01:23,363] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:01:23,364] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:01:23,364] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) +[2023-11-03 14:01:23,375] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:01:23,375] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) +[2023-11-03 14:01:23,378] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:01:23,389] INFO [TransactionCoordinator id=0] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator) +[2023-11-03 14:01:23,390] INFO [MetadataCache brokerId=0] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) +[2023-11-03 14:01:23,392] INFO [TxnMarkerSenderThread-0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) +[2023-11-03 14:01:23,392] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) +[2023-11-03 14:01:23,416] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:01:23,430] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) +[2023-11-03 14:01:23,441] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer) +[2023-11-03 14:01:23,443] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) +[2023-11-03 14:01:23,451] INFO [Controller id=0, targetBrokerId=0] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient) +[2023-11-03 14:01:23,453] INFO Kafka version: 3.6.0 (org.apache.kafka.common.utils.AppInfoParser) +[2023-11-03 14:01:23,453] INFO Kafka commitId: 60e845626d8a465a (org.apache.kafka.common.utils.AppInfoParser) +[2023-11-03 14:01:23,453] INFO Kafka startTimeMs: 1699034483445 (org.apache.kafka.common.utils.AppInfoParser) +[2023-11-03 14:01:23,453] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) +[2023-11-03 14:01:23,454] INFO [KafkaServer id=0] started (kafka.server.KafkaServer) +[2023-11-03 14:01:23,455] INFO [Controller id=0, targetBrokerId=0] Client requested connection close from node 0 (org.apache.kafka.clients.NetworkClient) +[2023-11-03 14:01:23,664] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:01:23,673] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:23:03,083] INFO Creating topic OrderEventQA2 with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient) +[2023-11-03 14:23:03,124] INFO [Controller id=0, targetBrokerId=0] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient) +[2023-11-03 14:23:03,147] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(OrderEventQA2-0) (kafka.server.ReplicaFetcherManager) +[2023-11-03 14:23:03,175] INFO [LogLoader partition=OrderEventQA2-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 14:23:03,183] INFO Created log for partition OrderEventQA2-0 in /tmp/kafka-logs/OrderEventQA2-0 with properties {} (kafka.log.LogManager) +[2023-11-03 14:23:03,183] INFO [Partition OrderEventQA2-0 broker=0] No checkpointed highwatermark is found for partition OrderEventQA2-0 (kafka.cluster.Partition) +[2023-11-03 14:23:03,184] INFO [Partition OrderEventQA2-0 broker=0] Log loaded for partition OrderEventQA2-0 with initial high watermark 0 (kafka.cluster.Partition) +[2023-11-03 14:23:04,124] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, 
segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(0), 1 -> ArrayBuffer(0), 2 -> ArrayBuffer(0), 3 -> ArrayBuffer(0), 4 -> ArrayBuffer(0), 5 -> ArrayBuffer(0), 6 -> ArrayBuffer(0), 7 -> ArrayBuffer(0), 8 -> ArrayBuffer(0), 9 -> ArrayBuffer(0), 10 -> ArrayBuffer(0), 11 -> ArrayBuffer(0), 12 -> ArrayBuffer(0), 13 -> ArrayBuffer(0), 14 -> ArrayBuffer(0), 15 -> ArrayBuffer(0), 16 -> ArrayBuffer(0), 17 -> ArrayBuffer(0), 18 -> ArrayBuffer(0), 19 -> ArrayBuffer(0), 20 -> ArrayBuffer(0), 21 -> ArrayBuffer(0), 22 -> ArrayBuffer(0), 23 -> ArrayBuffer(0), 24 -> ArrayBuffer(0), 25 -> ArrayBuffer(0), 26 -> ArrayBuffer(0), 27 -> ArrayBuffer(0), 28 -> ArrayBuffer(0), 29 -> ArrayBuffer(0), 30 -> ArrayBuffer(0), 31 -> ArrayBuffer(0), 32 -> ArrayBuffer(0), 33 -> ArrayBuffer(0), 34 -> ArrayBuffer(0), 35 -> ArrayBuffer(0), 36 -> ArrayBuffer(0), 37 -> ArrayBuffer(0), 38 -> ArrayBuffer(0), 39 -> ArrayBuffer(0), 40 -> ArrayBuffer(0), 41 -> ArrayBuffer(0), 42 -> ArrayBuffer(0), 43 -> ArrayBuffer(0), 44 -> ArrayBuffer(0), 45 -> ArrayBuffer(0), 46 -> ArrayBuffer(0), 47 -> ArrayBuffer(0), 48 -> ArrayBuffer(0), 49 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient) +[2023-11-03 14:23:04,193] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, 
__consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) +[2023-11-03 14:23:04,198] INFO [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 14:23:04,198] INFO Created log for partition __consumer_offsets-3 in /tmp/kafka-logs/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) +[2023-11-03 14:23:04,199] INFO [Partition __consumer_offsets-3 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) +[2023-11-03 14:23:04,199] INFO [Partition __consumer_offsets-3 broker=0] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) +[2023-11-03 14:23:04,202] INFO [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 14:23:04,202] INFO Created log for partition __consumer_offsets-18 in /tmp/kafka-logs/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) +[2023-11-03 14:23:04,203] INFO [Partition __consumer_offsets-18 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) +[2023-11-03 14:23:04,203] INFO [Partition __consumer_offsets-18 broker=0] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 
(kafka.cluster.Partition) +[2023-11-03 14:23:04,206] INFO [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 14:23:04,207] INFO Created log for partition __consumer_offsets-41 in /tmp/kafka-logs/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) +[2023-11-03 14:23:04,207] INFO [Partition __consumer_offsets-41 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) +[2023-11-03 14:23:04,207] INFO [Partition __consumer_offsets-41 broker=0] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) +[2023-11-03 14:23:04,210] INFO [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 14:23:04,211] INFO Created log for partition __consumer_offsets-10 in /tmp/kafka-logs/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) +[2023-11-03 14:23:04,211] INFO [Partition __consumer_offsets-10 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) +[2023-11-03 14:23:04,211] INFO [Partition __consumer_offsets-10 broker=0] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) +[2023-11-03 14:23:04,214] INFO [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 14:23:04,215] INFO Created log for partition __consumer_offsets-33 in /tmp/kafka-logs/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 
(kafka.log.LogManager) +[2023-11-03 14:23:04,215] INFO [Partition __consumer_offsets-33 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) +[2023-11-03 14:23:04,215] INFO [Partition __consumer_offsets-33 broker=0] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) +[2023-11-03 14:23:04,218] INFO [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 14:23:04,219] INFO Created log for partition __consumer_offsets-48 in /tmp/kafka-logs/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) +[2023-11-03 14:23:04,219] INFO [Partition __consumer_offsets-48 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) +[2023-11-03 14:23:04,219] INFO [Partition __consumer_offsets-48 broker=0] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) +[2023-11-03 14:23:04,223] INFO [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) +[2023-11-03 14:23:04,223] INFO Created log for partition __consumer_offsets-19 in /tmp/kafka-logs/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) +[2023-11-03 14:23:04,223] INFO [Partition __consumer_offsets-19 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) +[2023-11-03 14:23:04,223] INFO [Partition __consumer_offsets-19 broker=0] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) +[2023-11-03 14:23:04,227] INFO [LogLoader 
partition=__consumer_offsets-34, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,227] INFO Created log for partition __consumer_offsets-34 in /tmp/kafka-logs/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,227] INFO [Partition __consumer_offsets-34 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,227] INFO [Partition __consumer_offsets-34 broker=0] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,230] INFO [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,231] INFO Created log for partition __consumer_offsets-4 in /tmp/kafka-logs/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,231] INFO [Partition __consumer_offsets-4 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,231] INFO [Partition __consumer_offsets-4 broker=0] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,234] INFO [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,235] INFO Created log for partition __consumer_offsets-11 in /tmp/kafka-logs/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,235] INFO [Partition __consumer_offsets-11 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,235] INFO [Partition __consumer_offsets-11 broker=0] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,238] INFO [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,239] INFO Created log for partition __consumer_offsets-26 in /tmp/kafka-logs/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,239] INFO [Partition __consumer_offsets-26 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,239] INFO [Partition __consumer_offsets-26 broker=0] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,242] INFO [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,242] INFO Created log for partition __consumer_offsets-49 in /tmp/kafka-logs/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,242] INFO [Partition __consumer_offsets-49 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,243] INFO [Partition __consumer_offsets-49 broker=0] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,246] INFO [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,246] INFO Created log for partition __consumer_offsets-39 in /tmp/kafka-logs/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,246] INFO [Partition __consumer_offsets-39 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,246] INFO [Partition __consumer_offsets-39 broker=0] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,249] INFO [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,250] INFO Created log for partition __consumer_offsets-9 in /tmp/kafka-logs/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,250] INFO [Partition __consumer_offsets-9 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,250] INFO [Partition __consumer_offsets-9 broker=0] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,253] INFO [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,253] INFO Created log for partition __consumer_offsets-24 in /tmp/kafka-logs/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,253] INFO [Partition __consumer_offsets-24 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,253] INFO [Partition __consumer_offsets-24 broker=0] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,256] INFO [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,257] INFO Created log for partition __consumer_offsets-31 in /tmp/kafka-logs/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,257] INFO [Partition __consumer_offsets-31 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,257] INFO [Partition __consumer_offsets-31 broker=0] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,259] INFO [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,260] INFO Created log for partition __consumer_offsets-46 in /tmp/kafka-logs/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,260] INFO [Partition __consumer_offsets-46 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,260] INFO [Partition __consumer_offsets-46 broker=0] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,263] INFO [LogLoader partition=__consumer_offsets-1, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,263] INFO Created log for partition __consumer_offsets-1 in /tmp/kafka-logs/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,264] INFO [Partition __consumer_offsets-1 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,264] INFO [Partition __consumer_offsets-1 broker=0] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,266] INFO [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,266] INFO Created log for partition __consumer_offsets-16 in /tmp/kafka-logs/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,267] INFO [Partition __consumer_offsets-16 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,267] INFO [Partition __consumer_offsets-16 broker=0] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,269] INFO [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,270] INFO Created log for partition __consumer_offsets-2 in /tmp/kafka-logs/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,270] INFO [Partition __consumer_offsets-2 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,270] INFO [Partition __consumer_offsets-2 broker=0] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,273] INFO [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,273] INFO Created log for partition __consumer_offsets-25 in /tmp/kafka-logs/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,273] INFO [Partition __consumer_offsets-25 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,273] INFO [Partition __consumer_offsets-25 broker=0] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,276] INFO [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,276] INFO Created log for partition __consumer_offsets-40 in /tmp/kafka-logs/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,276] INFO [Partition __consumer_offsets-40 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,276] INFO [Partition __consumer_offsets-40 broker=0] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,279] INFO [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,280] INFO Created log for partition __consumer_offsets-47 in /tmp/kafka-logs/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,280] INFO [Partition __consumer_offsets-47 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,280] INFO [Partition __consumer_offsets-47 broker=0] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,282] INFO [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,283] INFO Created log for partition __consumer_offsets-17 in /tmp/kafka-logs/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,283] INFO [Partition __consumer_offsets-17 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,283] INFO [Partition __consumer_offsets-17 broker=0] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,286] INFO [LogLoader partition=__consumer_offsets-32, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,286] INFO Created log for partition __consumer_offsets-32 in /tmp/kafka-logs/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,286] INFO [Partition __consumer_offsets-32 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,286] INFO [Partition __consumer_offsets-32 broker=0] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,289] INFO [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,289] INFO Created log for partition __consumer_offsets-37 in /tmp/kafka-logs/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,289] INFO [Partition __consumer_offsets-37 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,289] INFO [Partition __consumer_offsets-37 broker=0] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,292] INFO [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,292] INFO Created log for partition __consumer_offsets-7 in /tmp/kafka-logs/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,293] INFO [Partition __consumer_offsets-7 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,293] INFO [Partition __consumer_offsets-7 broker=0] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,295] INFO [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,295] INFO Created log for partition __consumer_offsets-22 in /tmp/kafka-logs/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,295] INFO [Partition __consumer_offsets-22 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,296] INFO [Partition __consumer_offsets-22 broker=0] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,298] INFO [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,299] INFO Created log for partition __consumer_offsets-29 in /tmp/kafka-logs/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,299] INFO [Partition __consumer_offsets-29 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,299] INFO [Partition __consumer_offsets-29 broker=0] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,301] INFO [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,302] INFO Created log for partition __consumer_offsets-44 in /tmp/kafka-logs/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,302] INFO [Partition __consumer_offsets-44 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,302] INFO [Partition __consumer_offsets-44 broker=0] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,304] INFO [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,304] INFO Created log for partition __consumer_offsets-14 in /tmp/kafka-logs/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,305] INFO [Partition __consumer_offsets-14 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,305] INFO [Partition __consumer_offsets-14 broker=0] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,308] INFO [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,309] INFO Created log for partition __consumer_offsets-23 in /tmp/kafka-logs/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,309] INFO [Partition __consumer_offsets-23 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,309] INFO [Partition __consumer_offsets-23 broker=0] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,311] INFO [LogLoader partition=__consumer_offsets-38, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,312] INFO Created log for partition __consumer_offsets-38 in /tmp/kafka-logs/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,312] INFO [Partition __consumer_offsets-38 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,312] INFO [Partition __consumer_offsets-38 broker=0] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,315] INFO [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,315] INFO Created log for partition __consumer_offsets-8 in /tmp/kafka-logs/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,316] INFO [Partition __consumer_offsets-8 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,316] INFO [Partition __consumer_offsets-8 broker=0] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,319] INFO [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,320] INFO Created log for partition __consumer_offsets-45 in /tmp/kafka-logs/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,320] INFO [Partition __consumer_offsets-45 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,320] INFO [Partition __consumer_offsets-45 broker=0] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,323] INFO [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,323] INFO Created log for partition __consumer_offsets-15 in /tmp/kafka-logs/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,323] INFO [Partition __consumer_offsets-15 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,323] INFO [Partition __consumer_offsets-15 broker=0] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,326] INFO [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,326] INFO Created log for partition __consumer_offsets-30 in /tmp/kafka-logs/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,326] INFO [Partition __consumer_offsets-30 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,326] INFO [Partition __consumer_offsets-30 broker=0] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,329] INFO [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,329] INFO Created log for partition __consumer_offsets-0 in /tmp/kafka-logs/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,329] INFO [Partition __consumer_offsets-0 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,329] INFO [Partition __consumer_offsets-0 broker=0] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,332] INFO [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,332] INFO Created log for partition __consumer_offsets-35 in /tmp/kafka-logs/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,332] INFO [Partition __consumer_offsets-35 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,332] INFO [Partition __consumer_offsets-35 broker=0] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,335] INFO [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,335] INFO Created log for partition __consumer_offsets-5 in /tmp/kafka-logs/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,335] INFO [Partition __consumer_offsets-5 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,335] INFO [Partition __consumer_offsets-5 broker=0] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,338] INFO [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,338] INFO Created log for partition __consumer_offsets-20 in /tmp/kafka-logs/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,338] INFO [Partition __consumer_offsets-20 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,338] INFO [Partition __consumer_offsets-20 broker=0] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,340] INFO [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,341] INFO Created log for partition __consumer_offsets-27 in /tmp/kafka-logs/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,341] INFO [Partition __consumer_offsets-27 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,341] INFO [Partition __consumer_offsets-27 broker=0] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,343] INFO [LogLoader partition=__consumer_offsets-42, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,343] INFO Created log for partition __consumer_offsets-42 in /tmp/kafka-logs/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,343] INFO [Partition __consumer_offsets-42 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,343] INFO [Partition __consumer_offsets-42 broker=0] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,346] INFO [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,346] INFO Created log for partition __consumer_offsets-12 in /tmp/kafka-logs/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,346] INFO [Partition __consumer_offsets-12 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,346] INFO [Partition __consumer_offsets-12 broker=0] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,348] INFO [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,348] INFO Created log for partition __consumer_offsets-21 in /tmp/kafka-logs/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,349] INFO [Partition __consumer_offsets-21 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,349] INFO [Partition __consumer_offsets-21 broker=0] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,351] INFO [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,351] INFO Created log for partition __consumer_offsets-36 in /tmp/kafka-logs/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,351] INFO [Partition __consumer_offsets-36 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,351] INFO [Partition __consumer_offsets-36 broker=0] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,354] INFO [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,354] INFO Created log for partition __consumer_offsets-6 in /tmp/kafka-logs/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,354] INFO [Partition __consumer_offsets-6 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,354] INFO [Partition __consumer_offsets-6 broker=0] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,356] INFO [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,357] INFO Created log for partition __consumer_offsets-43 in /tmp/kafka-logs/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,357] INFO [Partition __consumer_offsets-43 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,357] INFO [Partition __consumer_offsets-43 broker=0] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,359] INFO [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,359] INFO Created log for partition __consumer_offsets-13 in /tmp/kafka-logs/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,359] INFO [Partition __consumer_offsets-13 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,359] INFO [Partition __consumer_offsets-13 broker=0] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,362] INFO [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
+[2023-11-03 14:23:04,362] INFO Created log for partition __consumer_offsets-28 in /tmp/kafka-logs/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
+[2023-11-03 14:23:04,362] INFO [Partition __consumer_offsets-28 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,362] INFO [Partition __consumer_offsets-28 broker=0] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
+[2023-11-03 14:23:04,363] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,363] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
+[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
+[2023-11-03 14:23:04,364] INFO 
[GroupCoordinator 0]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling 
loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,364] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,364] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group 
coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from 
__consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 36 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,365] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:04,365] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,368] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-3 in 3 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,368] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-18 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,368] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-41 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,368] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-10 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,368] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-33 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-48 in 5 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-19 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-34 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-4 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-11 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-26 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-49 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-39 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-9 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,369] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-24 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-31 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-46 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-1 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-16 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-2 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-25 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-40 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-47 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-17 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,370] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-32 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-7 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-22 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-29 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-44 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-14 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-23 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-38 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-8 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,371] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-45 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-15 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-30 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-0 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-35 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-5 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-20 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-27 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-42 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-12 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-21 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-36 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-6 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,372] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-43 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,373] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-13 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:04,373] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-28 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) +[2023-11-03 14:23:05,123] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in Empty state. Created a new member id rdkafka-647c1d45-7ee9-44f3-bb43-da525071d1e7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:05,129] INFO [GroupCoordinator 0]: Preparing to rebalance group ConsumerGroup01 in state PreparingRebalance with old generation 0 (__consumer_offsets-28) (reason: Adding new member rdkafka-647c1d45-7ee9-44f3-bb43-da525071d1e7 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:05,143] INFO [GroupCoordinator 0]: Stabilized group ConsumerGroup01 generation 1 (__consumer_offsets-28) with 1 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:05,148] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-647c1d45-7ee9-44f3-bb43-da525071d1e7 for group ConsumerGroup01 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:54,677] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in Stable state. Created a new member id rdkafka-1a91f36d-de43-46ba-a289-f67a8fda8359 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:23:54,678] INFO [GroupCoordinator 0]: Preparing to rebalance group ConsumerGroup01 in state PreparingRebalance with old generation 1 (__consumer_offsets-28) (reason: Adding new member rdkafka-1a91f36d-de43-46ba-a289-f67a8fda8359 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:24:24,613] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in PreparingRebalance state. 
Created a new member id rdkafka-776e9df4-00c7-4e7d-a711-16ab5fe56213 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:24:32,658] INFO [GroupCoordinator 0]: Member rdkafka-647c1d45-7ee9-44f3-bb43-da525071d1e7 in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:24:32,659] INFO [GroupCoordinator 0]: Stabilized group ConsumerGroup01 generation 2 (__consumer_offsets-28) with 2 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:24:32,660] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-1a91f36d-de43-46ba-a289-f67a8fda8359 for group ConsumerGroup01 for generation 2. The group has 2 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:25:38,662] INFO [GroupCoordinator 0]: Member rdkafka-776e9df4-00c7-4e7d-a711-16ab5fe56213 in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:25:38,663] INFO [GroupCoordinator 0]: Preparing to rebalance group ConsumerGroup01 in state PreparingRebalance with old generation 2 (__consumer_offsets-28) (reason: removing member rdkafka-776e9df4-00c7-4e7d-a711-16ab5fe56213 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:25:38,666] INFO [GroupCoordinator 0]: Stabilized group ConsumerGroup01 generation 3 (__consumer_offsets-28) with 1 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:25:38,667] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-1a91f36d-de43-46ba-a289-f67a8fda8359 for group ConsumerGroup01 for generation 3. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:00,256] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in Stable state. 
Created a new member id rdkafka-1f0745b4-626b-4168-83dc-7ef55344a53e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:00,256] INFO [GroupCoordinator 0]: Preparing to rebalance group ConsumerGroup01 in state PreparingRebalance with old generation 3 (__consumer_offsets-28) (reason: Adding new member rdkafka-1f0745b4-626b-4168-83dc-7ef55344a53e with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:02,672] INFO [GroupCoordinator 0]: Stabilized group ConsumerGroup01 generation 4 (__consumer_offsets-28) with 2 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:02,673] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-1a91f36d-de43-46ba-a289-f67a8fda8359 for group ConsumerGroup01 for generation 4. The group has 2 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:15,062] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in Stable state. Created a new member id rdkafka-832e05cb-7b6a-4b1b-a2c9-76dd3132a5c6 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:15,063] INFO [GroupCoordinator 0]: Preparing to rebalance group ConsumerGroup01 in state PreparingRebalance with old generation 4 (__consumer_offsets-28) (reason: Adding new member rdkafka-832e05cb-7b6a-4b1b-a2c9-76dd3132a5c6 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:21,698] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in PreparingRebalance state. Created a new member id rdkafka-74f25a3a-d5ab-4fa0-b1dc-b0c59094594a and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:36,431] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in PreparingRebalance state. Created a new member id rdkafka-43bcacdc-d7c3-4808-b6ad-87452f80e9d5 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:47,675] INFO [GroupCoordinator 0]: Member rdkafka-1f0745b4-626b-4168-83dc-7ef55344a53e in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:47,676] INFO [GroupCoordinator 0]: Stabilized group ConsumerGroup01 generation 5 (__consumer_offsets-28) with 4 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:59,773] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in CompletingRebalance state. Created a new member id rdkafka-36fa73ed-f065-4359-9c19-f382381f4a88 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:26:59,775] INFO [GroupCoordinator 0]: Preparing to rebalance group ConsumerGroup01 in state PreparingRebalance with old generation 5 (__consumer_offsets-28) (reason: Adding new member rdkafka-36fa73ed-f065-4359-9c19-f382381f4a88 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:27:32,677] INFO [GroupCoordinator 0]: Member rdkafka-1a91f36d-de43-46ba-a289-f67a8fda8359 in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:27:32,677] INFO [GroupCoordinator 0]: Member rdkafka-832e05cb-7b6a-4b1b-a2c9-76dd3132a5c6 in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:27:44,775] INFO [GroupCoordinator 0]: Member rdkafka-43bcacdc-d7c3-4808-b6ad-87452f80e9d5 in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:27:44,776] INFO [GroupCoordinator 0]: Stabilized group ConsumerGroup01 generation 6 (__consumer_offsets-28) with 2 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:27:49,017] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in CompletingRebalance state. Created a new member id rdkafka-3a637c2c-a2de-44db-acf1-39f46b019eae and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:27:49,018] INFO [GroupCoordinator 0]: Preparing to rebalance group ConsumerGroup01 in state PreparingRebalance with old generation 6 (__consumer_offsets-28) (reason: Adding new member rdkafka-3a637c2c-a2de-44db-acf1-39f46b019eae with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:27:51,250] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group ConsumerGroup01 in PreparingRebalance state. Created a new member id rdkafka-34655fad-c217-4fa4-9727-c9c5d201e5b3 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:28:29,776] INFO [GroupCoordinator 0]: Member rdkafka-74f25a3a-d5ab-4fa0-b1dc-b0c59094594a in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:28:34,019] INFO [GroupCoordinator 0]: Member rdkafka-36fa73ed-f065-4359-9c19-f382381f4a88 in group ConsumerGroup01 has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:28:34,019] INFO [GroupCoordinator 0]: Stabilized group ConsumerGroup01 generation 7 (__consumer_offsets-28) with 2 members (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:28:34,020] INFO [GroupCoordinator 0]: Assignment received from leader rdkafka-34655fad-c217-4fa4-9727-c9c5d201e5b3 for group ConsumerGroup01 for generation 7. The group has 2 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:29:30,475] WARN Session 0x100000330eb0000 for server localhost/[0:0:0:0:0:0:0:1]:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. 
(org.apache.zookeeper.ClientCnxn) +EndOfStreamException: Unable to read additional data from server sessionid 0x100000330eb0000, likely server has closed socket + at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) + at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) + at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289) +[2023-11-03 14:29:32,156] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 14:29:32,156] WARN Session 0x100000330eb0000 for server localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn) +java.net.ConnectException: Connection refused + at java.base/sun.nio.ch.Net.pollConnect(Native Method) + at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672) + at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946) + at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344) + at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289) +[2023-11-03 14:29:33,643] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler) +[2023-11-03 14:29:33,645] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer) +[2023-11-03 14:29:33,646] INFO [KafkaServer id=0] Starting controlled shutdown (kafka.server.KafkaServer) +[2023-11-03 14:29:33,655] INFO [KafkaServer id=0] Controlled shutdown request returned successfully after 6ms (kafka.server.KafkaServer) +[2023-11-03 14:29:33,657] INFO [/config/changes-event-process-thread]: Shutting down (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) +[2023-11-03 14:29:33,658] INFO [/config/changes-event-process-thread]: Stopped (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) +[2023-11-03 14:29:33,658] INFO 
[/config/changes-event-process-thread]: Shutdown completed (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) +[2023-11-03 14:29:33,658] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Stopping socket server request processors (kafka.network.SocketServer) +[2023-11-03 14:29:33,662] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Stopped socket server request processors (kafka.network.SocketServer) +[2023-11-03 14:29:33,662] INFO [data-plane Kafka Request Handler on Broker 0], shutting down (kafka.server.KafkaRequestHandlerPool) +[2023-11-03 14:29:33,663] INFO [data-plane Kafka Request Handler on Broker 0], shut down completely (kafka.server.KafkaRequestHandlerPool) +[2023-11-03 14:29:33,664] INFO [ExpirationReaper-0-AlterAcls]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,665] INFO [ExpirationReaper-0-AlterAcls]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,665] INFO [ExpirationReaper-0-AlterAcls]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,665] INFO [KafkaApi-0] Shutdown complete. (kafka.server.KafkaApis) +[2023-11-03 14:29:33,666] INFO [ExpirationReaper-0-topic]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,666] INFO [ExpirationReaper-0-topic]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,666] INFO [ExpirationReaper-0-topic]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,667] INFO [TransactionCoordinator id=0] Shutting down. 
(kafka.coordinator.transaction.TransactionCoordinator) +[2023-11-03 14:29:33,667] INFO [Transaction State Manager 0]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager) +[2023-11-03 14:29:33,667] INFO [TxnMarkerSenderThread-0]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager) +[2023-11-03 14:29:33,667] INFO [TxnMarkerSenderThread-0]: Stopped (kafka.coordinator.transaction.TransactionMarkerChannelManager) +[2023-11-03 14:29:33,667] INFO [TxnMarkerSenderThread-0]: Shutdown completed (kafka.coordinator.transaction.TransactionMarkerChannelManager) +[2023-11-03 14:29:33,668] INFO [TransactionCoordinator id=0] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator) +[2023-11-03 14:29:33,668] INFO [GroupCoordinator 0]: Shutting down. (kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:29:33,668] INFO [ExpirationReaper-0-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,668] INFO [ExpirationReaper-0-Heartbeat]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,668] INFO [ExpirationReaper-0-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,668] INFO [ExpirationReaper-0-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,669] INFO [ExpirationReaper-0-Rebalance]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,669] INFO [ExpirationReaper-0-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,669] INFO [GroupCoordinator 0]: Shutdown complete. 
(kafka.coordinator.group.GroupCoordinator) +[2023-11-03 14:29:33,669] INFO [ReplicaManager broker=0] Shutting down (kafka.server.ReplicaManager) +[2023-11-03 14:29:33,670] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler) +[2023-11-03 14:29:33,670] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler) +[2023-11-03 14:29:33,670] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler) +[2023-11-03 14:29:33,670] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager) +[2023-11-03 14:29:33,670] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager) +[2023-11-03 14:29:33,671] INFO [ReplicaAlterLogDirsManager on broker 0] shutting down (kafka.server.ReplicaAlterLogDirsManager) +[2023-11-03 14:29:33,671] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager) +[2023-11-03 14:29:33,671] INFO [ExpirationReaper-0-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,671] INFO [ExpirationReaper-0-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,671] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,671] INFO [ExpirationReaper-0-RemoteFetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,671] INFO [ExpirationReaper-0-RemoteFetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,671] INFO [ExpirationReaper-0-RemoteFetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,672] INFO [ExpirationReaper-0-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 
+[2023-11-03 14:29:33,672] INFO [ExpirationReaper-0-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,672] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,672] INFO [ExpirationReaper-0-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,672] INFO [ExpirationReaper-0-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,672] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,673] INFO [ExpirationReaper-0-ElectLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,673] INFO [ExpirationReaper-0-ElectLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,673] INFO [ExpirationReaper-0-ElectLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) +[2023-11-03 14:29:33,674] INFO [AddPartitionsToTxnSenderThread-0]: Shutting down (kafka.server.AddPartitionsToTxnManager) +[2023-11-03 14:29:33,674] INFO [AddPartitionsToTxnSenderThread-0]: Stopped (kafka.server.AddPartitionsToTxnManager) +[2023-11-03 14:29:33,674] INFO [AddPartitionsToTxnSenderThread-0]: Shutdown completed (kafka.server.AddPartitionsToTxnManager) +[2023-11-03 14:29:33,675] INFO [ReplicaManager broker=0] Shut down completely (kafka.server.ReplicaManager) +[2023-11-03 14:29:33,675] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Shutting down (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:29:33,675] INFO [zk-broker-0-to-controller-alter-partition-channel-manager]: Stopped (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:29:33,675] INFO 
[zk-broker-0-to-controller-alter-partition-channel-manager]: Shutdown completed (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:29:33,676] INFO Broker to controller channel manager for alter-partition shutdown (kafka.server.BrokerToControllerChannelManagerImpl) +[2023-11-03 14:29:33,676] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Shutting down (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:29:33,676] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Stopped (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:29:33,676] INFO [zk-broker-0-to-controller-forwarding-channel-manager]: Shutdown completed (kafka.server.BrokerToControllerRequestThread) +[2023-11-03 14:29:33,676] INFO Broker to controller channel manager for forwarding shutdown (kafka.server.BrokerToControllerChannelManagerImpl) +[2023-11-03 14:29:33,676] INFO Shutting down. (kafka.log.LogManager) +[2023-11-03 14:29:33,677] INFO [kafka-log-cleaner-thread-0]: Shutting down (kafka.log.LogCleaner$CleanerThread) +[2023-11-03 14:29:33,677] INFO [kafka-log-cleaner-thread-0]: Stopped (kafka.log.LogCleaner$CleanerThread) +[2023-11-03 14:29:33,677] INFO [kafka-log-cleaner-thread-0]: Shutdown completed (kafka.log.LogCleaner$CleanerThread) +[2023-11-03 14:29:33,692] INFO [ProducerStateManager partition=OrderEventQA2-0]Wrote producer snapshot at offset 10 with 0 producer ids in 1 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) +[2023-11-03 14:29:33,696] INFO [ProducerStateManager partition=__consumer_offsets-28]Wrote producer snapshot at offset 11 with 0 producer ids in 1 ms. (org.apache.kafka.storage.internals.log.ProducerStateManager) +[2023-11-03 14:29:33,699] INFO Shutdown complete. 
(kafka.log.LogManager) +[2023-11-03 14:29:33,702] INFO [feature-zk-node-event-process-thread]: Shutting down (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) +[2023-11-03 14:29:33,702] INFO [feature-zk-node-event-process-thread]: Stopped (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) +[2023-11-03 14:29:33,702] INFO [feature-zk-node-event-process-thread]: Shutdown completed (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) +[2023-11-03 14:29:33,702] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient) +[2023-11-03 14:29:33,852] INFO Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181. (org.apache.zookeeper.ClientCnxn) +[2023-11-03 14:29:33,954] INFO Session: 0x100000330eb0000 closed (org.apache.zookeeper.ZooKeeper) +[2023-11-03 14:29:33,954] INFO EventThread shut down for session: 0x100000330eb0000 (org.apache.zookeeper.ClientCnxn) +[2023-11-03 14:29:33,955] INFO [ZooKeeperClient Kafka server] Closed. 
(kafka.zookeeper.ZooKeeperClient) +[2023-11-03 14:29:33,955] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-ControllerMutation]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-ControllerMutation]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,956] INFO [ThrottledChannelReaper-ControllerMutation]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper) +[2023-11-03 14:29:33,957] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Shutting down socket server (kafka.network.SocketServer) +[2023-11-03 14:29:33,965] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Shutdown completed (kafka.network.SocketServer) +[2023-11-03 14:29:33,965] INFO 
Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics) +[2023-11-03 14:29:33,965] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics) +[2023-11-03 14:29:33,965] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics) +[2023-11-03 14:29:33,966] INFO Broker and topic stats closed (kafka.server.BrokerTopicStats) +[2023-11-03 14:29:33,966] INFO App info kafka.server for 0 unregistered (org.apache.kafka.common.utils.AppInfoParser) +[2023-11-03 14:29:33,966] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer) diff --git a/logs/server.log.2023-11-03-14 b/logs/server.log.2023-11-03-14 new file mode 100644 index 0000000..cdf913e --- /dev/null +++ b/logs/server.log.2023-11-03-14 @@ -0,0 +1 @@ +[2023-11-03 14:01:22,743] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) diff --git a/logs/state-change.log b/logs/state-change.log new file mode 100644 index 0000000..0768c0c --- /dev/null +++ b/logs/state-change.log @@ -0,0 +1,70 @@ +[2023-11-03 15:24:54,500] INFO [Controller id=0 epoch=2] Sending UpdateMetadata request to brokers HashSet(0) for 0 partitions (state.change.logger) +[2023-11-03 15:24:54,528] INFO [Controller id=0 epoch=2] Sending LeaderAndIsr request to broker 0 with 51 become-leader and 0 become-follower partitions (state.change.logger) +[2023-11-03 15:24:54,530] INFO [Controller id=0 epoch=2] Sending UpdateMetadata request to brokers HashSet(0) for 51 partitions (state.change.logger) +[2023-11-03 15:24:54,662] INFO [Broker id=0] Handling LeaderAndIsr request correlationId 1 from controller 0 for 51 partitions (state.change.logger) +[2023-11-03 15:24:54,692] INFO [Broker id=0] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 0 epoch 2 as part of the become-leader transition for 51 partitions (state.change.logger) +[2023-11-03 15:24:54,700] INFO [Broker id=0] Leader __consumer_offsets-3 with topic 
id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,702] INFO [Broker id=0] Leader __consumer_offsets-18 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,703] INFO [Broker id=0] Leader __consumer_offsets-41 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,703] INFO [Broker id=0] Leader __consumer_offsets-10 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,703] INFO [Broker id=0] Leader __consumer_offsets-33 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,704] INFO [Broker id=0] Leader __consumer_offsets-48 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 15:24:54,704] INFO [Broker id=0] Leader __consumer_offsets-19 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,705] INFO [Broker id=0] Leader __consumer_offsets-34 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,705] INFO [Broker id=0] Leader __consumer_offsets-4 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,706] INFO [Broker id=0] Leader __consumer_offsets-11 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,706] INFO [Broker id=0] Leader __consumer_offsets-26 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,707] INFO [Broker id=0] Leader __consumer_offsets-49 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 15:24:54,707] INFO [Broker id=0] Leader __consumer_offsets-39 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,708] INFO [Broker id=0] Leader __consumer_offsets-9 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,708] INFO [Broker id=0] Leader __consumer_offsets-24 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,708] INFO [Broker id=0] Leader __consumer_offsets-31 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,709] INFO [Broker id=0] Leader __consumer_offsets-46 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,709] INFO [Broker id=0] Leader __consumer_offsets-1 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 15:24:54,710] INFO [Broker id=0] Leader __consumer_offsets-16 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,710] INFO [Broker id=0] Leader __consumer_offsets-2 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,710] INFO [Broker id=0] Leader __consumer_offsets-25 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,711] INFO [Broker id=0] Leader __consumer_offsets-40 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,711] INFO [Broker id=0] Leader __consumer_offsets-47 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,712] INFO [Broker id=0] Leader __consumer_offsets-17 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 15:24:54,712] INFO [Broker id=0] Leader __consumer_offsets-32 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,713] INFO [Broker id=0] Leader __consumer_offsets-37 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,713] INFO [Broker id=0] Leader __consumer_offsets-7 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,714] INFO [Broker id=0] Leader __consumer_offsets-22 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,714] INFO [Broker id=0] Leader __consumer_offsets-29 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,715] INFO [Broker id=0] Leader __consumer_offsets-44 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 15:24:54,715] INFO [Broker id=0] Leader __consumer_offsets-14 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,716] INFO [Broker id=0] Leader __consumer_offsets-23 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,716] INFO [Broker id=0] Leader __consumer_offsets-38 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,716] INFO [Broker id=0] Leader __consumer_offsets-8 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,717] INFO [Broker id=0] Leader __consumer_offsets-45 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,717] INFO [Broker id=0] Leader __consumer_offsets-15 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 15:24:54,718] INFO [Broker id=0] Leader __consumer_offsets-30 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,718] INFO [Broker id=0] Leader __consumer_offsets-0 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,719] INFO [Broker id=0] Leader __consumer_offsets-35 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,719] INFO [Broker id=0] Leader __consumer_offsets-5 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,720] INFO [Broker id=0] Leader __consumer_offsets-20 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,720] INFO [Broker id=0] Leader __consumer_offsets-27 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 15:24:54,720] INFO [Broker id=0] Leader __consumer_offsets-42 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,721] INFO [Broker id=0] Leader __consumer_offsets-12 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,722] INFO [Broker id=0] Leader __consumer_offsets-21 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,722] INFO [Broker id=0] Leader __consumer_offsets-36 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,723] INFO [Broker id=0] Leader OrderEventQA2-0 with topic id Some(78sflbJnR2GXTOOat5Yo7Q) starts at leader epoch 0 from offset 10 with partition epoch 0, high watermark 10, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,723] INFO [Broker id=0] Leader __consumer_offsets-6 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 15:24:54,723] INFO [Broker id=0] Leader __consumer_offsets-43 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,724] INFO [Broker id=0] Leader __consumer_offsets-13 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,724] INFO [Broker id=0] Leader __consumer_offsets-28 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 11 with partition epoch 0, high watermark 11, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 15:24:54,740] INFO [Broker id=0] Finished LeaderAndIsr request in 78ms correlationId 1 from controller 0 for 51 partitions (state.change.logger) +[2023-11-03 15:24:54,750] INFO [Broker id=0] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 0 epoch 2 with correlation id 2 (state.change.logger) +[2023-11-03 15:36:27,549] INFO [Controller id=0 epoch=2] Changed partition test-topic-0 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 15:36:27,549] INFO [Controller id=0 epoch=2] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) +[2023-11-03 15:36:27,552] INFO [Controller id=0 epoch=2] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) +[2023-11-03 15:36:27,575] INFO [Controller id=0 epoch=2] Changed partition test-topic-0 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 15:36:27,576] INFO [Controller id=0 epoch=2] Sending LeaderAndIsr request to broker 0 with 1 become-leader and 0 become-follower partitions (state.change.logger) +[2023-11-03 15:36:27,576] INFO [Controller id=0 epoch=2] Sending UpdateMetadata request to brokers HashSet(0) for 1 partitions (state.change.logger) +[2023-11-03 15:36:27,578] INFO [Controller id=0 epoch=2] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) +[2023-11-03 15:36:27,582] INFO [Broker id=0] Handling LeaderAndIsr request correlationId 3 from controller 0 for 1 partitions (state.change.logger) +[2023-11-03 15:36:27,583] INFO [Broker id=0] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 0 epoch 2 as part of the become-leader transition for 1 partitions (state.change.logger) +[2023-11-03 15:36:27,601] INFO [Broker id=0] Leader test-topic-0 with topic id Some(Gh__wBw-TqeS8XzMJZBzeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 15:36:27,603] INFO [Broker id=0] Finished LeaderAndIsr request in 22ms correlationId 3 from controller 0 for 1 partitions (state.change.logger) +[2023-11-03 15:36:27,607] INFO [Broker id=0] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 0 epoch 2 with correlation id 4 (state.change.logger) diff --git a/logs/state-change.log.2023-11-03-10 b/logs/state-change.log.2023-11-03-10 new file mode 100644 index 0000000..e69de29 diff --git a/logs/state-change.log.2023-11-03-14 b/logs/state-change.log.2023-11-03-14 new file mode 100644 index 0000000..d42202b --- /dev/null +++ b/logs/state-change.log.2023-11-03-14 @@ -0,0 +1,173 @@ +[2023-11-03 14:01:23,428] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet(0) for 0 partitions (state.change.logger) +[2023-11-03 14:23:03,098] INFO [Controller id=0 epoch=1] Changed partition OrderEventQA2-0 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:03,098] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) +[2023-11-03 14:23:03,100] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) +[2023-11-03 14:23:03,113] INFO [Controller id=0 epoch=1] Changed partition OrderEventQA2-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:03,122] INFO [Controller id=0 epoch=1] Sending LeaderAndIsr request to broker 0 with 1 become-leader and 0 become-follower partitions (state.change.logger) +[2023-11-03 14:23:03,124] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet(0) for 1 partitions 
(state.change.logger) +[2023-11-03 14:23:03,125] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) +[2023-11-03 14:23:03,136] INFO [Broker id=0] Handling LeaderAndIsr request correlationId 1 from controller 0 for 1 partitions (state.change.logger) +[2023-11-03 14:23:03,147] INFO [Broker id=0] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 0 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) +[2023-11-03 14:23:03,186] INFO [Broker id=0] Leader OrderEventQA2-0 with topic id Some(78sflbJnR2GXTOOat5Yo7Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:03,192] INFO [Broker id=0] Finished LeaderAndIsr request in 62ms correlationId 1 from controller 0 for 1 partitions (state.change.logger) +[2023-11-03 14:23:03,197] INFO [Broker id=0] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 0 epoch 1 with correlation id 2 (state.change.logger) +[2023-11-03 14:23:04,134] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,134] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,134] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,134] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 0 
(state.change.logger) +[2023-11-03 14:23:04,134] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,134] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,134] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,134] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 
epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-3 state from 
NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 0 
(state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 
epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 0 (state.change.logger) +[2023-11-03 14:23:04,135] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) +[2023-11-03 14:23:04,137] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) +[2023-11-03 14:23:04,173] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with 
state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
+[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,174] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed 
partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with 
state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,175] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=0, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) +[2023-11-03 14:23:04,176] INFO [Controller id=0 epoch=1] Sending LeaderAndIsr request to broker 0 with 50 become-leader and 0 become-follower partitions (state.change.logger) +[2023-11-03 14:23:04,176] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet(0) for 50 partitions (state.change.logger) +[2023-11-03 14:23:04,177] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) +[2023-11-03 14:23:04,178] INFO [Broker id=0] Handling LeaderAndIsr request correlationId 3 from controller 0 for 50 partitions (state.change.logger) +[2023-11-03 
14:23:04,194] INFO [Broker id=0] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 0 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) +[2023-11-03 14:23:04,199] INFO [Broker id=0] Leader __consumer_offsets-3 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,203] INFO [Broker id=0] Leader __consumer_offsets-18 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,207] INFO [Broker id=0] Leader __consumer_offsets-41 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,211] INFO [Broker id=0] Leader __consumer_offsets-10 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,215] INFO [Broker id=0] Leader __consumer_offsets-33 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 14:23:04,219] INFO [Broker id=0] Leader __consumer_offsets-48 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,224] INFO [Broker id=0] Leader __consumer_offsets-19 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,227] INFO [Broker id=0] Leader __consumer_offsets-34 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,231] INFO [Broker id=0] Leader __consumer_offsets-4 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,235] INFO [Broker id=0] Leader __consumer_offsets-11 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,239] INFO [Broker id=0] Leader __consumer_offsets-26 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 14:23:04,243] INFO [Broker id=0] Leader __consumer_offsets-49 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,246] INFO [Broker id=0] Leader __consumer_offsets-39 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,250] INFO [Broker id=0] Leader __consumer_offsets-9 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,253] INFO [Broker id=0] Leader __consumer_offsets-24 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,257] INFO [Broker id=0] Leader __consumer_offsets-31 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,260] INFO [Broker id=0] Leader __consumer_offsets-46 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 14:23:04,264] INFO [Broker id=0] Leader __consumer_offsets-1 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,267] INFO [Broker id=0] Leader __consumer_offsets-16 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,270] INFO [Broker id=0] Leader __consumer_offsets-2 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,273] INFO [Broker id=0] Leader __consumer_offsets-25 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,276] INFO [Broker id=0] Leader __consumer_offsets-40 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,280] INFO [Broker id=0] Leader __consumer_offsets-47 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 14:23:04,283] INFO [Broker id=0] Leader __consumer_offsets-17 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,287] INFO [Broker id=0] Leader __consumer_offsets-32 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,289] INFO [Broker id=0] Leader __consumer_offsets-37 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,293] INFO [Broker id=0] Leader __consumer_offsets-7 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,296] INFO [Broker id=0] Leader __consumer_offsets-22 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,299] INFO [Broker id=0] Leader __consumer_offsets-29 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 14:23:04,302] INFO [Broker id=0] Leader __consumer_offsets-44 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,305] INFO [Broker id=0] Leader __consumer_offsets-14 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,309] INFO [Broker id=0] Leader __consumer_offsets-23 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,312] INFO [Broker id=0] Leader __consumer_offsets-38 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,316] INFO [Broker id=0] Leader __consumer_offsets-8 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,320] INFO [Broker id=0] Leader __consumer_offsets-45 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 14:23:04,323] INFO [Broker id=0] Leader __consumer_offsets-15 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,326] INFO [Broker id=0] Leader __consumer_offsets-30 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,329] INFO [Broker id=0] Leader __consumer_offsets-0 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,332] INFO [Broker id=0] Leader __consumer_offsets-35 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,336] INFO [Broker id=0] Leader __consumer_offsets-5 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,338] INFO [Broker id=0] Leader __consumer_offsets-20 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 14:23:04,341] INFO [Broker id=0] Leader __consumer_offsets-27 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,343] INFO [Broker id=0] Leader __consumer_offsets-42 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,346] INFO [Broker id=0] Leader __consumer_offsets-12 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,349] INFO [Broker id=0] Leader __consumer_offsets-21 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,351] INFO [Broker id=0] Leader __consumer_offsets-36 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,354] INFO [Broker id=0] Leader __consumer_offsets-6 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) +[2023-11-03 14:23:04,357] INFO [Broker id=0] Leader __consumer_offsets-43 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,360] INFO [Broker id=0] Leader __consumer_offsets-13 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,362] INFO [Broker id=0] Leader __consumer_offsets-28 with topic id Some(PIKFaFrMTbm2cK6klZ1I7A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) +[2023-11-03 14:23:04,365] INFO [Broker id=0] Finished LeaderAndIsr request in 187ms correlationId 3 from controller 0 for 50 partitions (state.change.logger) +[2023-11-03 14:23:04,367] INFO [Broker id=0] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 0 epoch 1 with correlation id 4 (state.change.logger) +[2023-11-03 14:29:33,653] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) diff --git a/logs/zookeeper-gc.log b/logs/zookeeper-gc.log new file mode 100644 index 0000000..63df2ae --- /dev/null +++ b/logs/zookeeper-gc.log @@ -0,0 +1,34 @@ +[2023-11-03T15:24:30.546-0400][gc] Using G1 +[2023-11-03T15:24:30.564-0400][gc,init] Version: 17.0.6+10 (release) +[2023-11-03T15:24:30.565-0400][gc,init] CPUs: 12 total, 12 available +[2023-11-03T15:24:30.565-0400][gc,init] Memory: 63941M +[2023-11-03T15:24:30.565-0400][gc,init] Large Page 
Support: Disabled +[2023-11-03T15:24:30.565-0400][gc,init] NUMA Support: Disabled +[2023-11-03T15:24:30.565-0400][gc,init] Compressed Oops: Enabled (32-bit) +[2023-11-03T15:24:30.565-0400][gc,init] Heap Region Size: 1M +[2023-11-03T15:24:30.565-0400][gc,init] Heap Min Capacity: 512M +[2023-11-03T15:24:30.565-0400][gc,init] Heap Initial Capacity: 512M +[2023-11-03T15:24:30.565-0400][gc,init] Heap Max Capacity: 512M +[2023-11-03T15:24:30.565-0400][gc,init] Pre-touch: Disabled +[2023-11-03T15:24:30.565-0400][gc,init] Parallel Workers: 10 +[2023-11-03T15:24:30.565-0400][gc,init] Concurrent Workers: 3 +[2023-11-03T15:24:30.565-0400][gc,init] Concurrent Refinement Workers: 10 +[2023-11-03T15:24:30.565-0400][gc,init] Periodic GC: Disabled +[2023-11-03T15:24:30.565-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0. +[2023-11-03T15:24:30.565-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824 +[2023-11-03T15:24:30.565-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000 +[2023-11-03T15:24:35.132-0400][gc,start ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) +[2023-11-03T15:24:35.133-0400][gc,task ] GC(0) Using 10 workers of 10 for evacuation +[2023-11-03T15:24:35.141-0400][gc,phases ] GC(0) Pre Evacuate Collection Set: 0.1ms +[2023-11-03T15:24:35.141-0400][gc,phases ] GC(0) Merge Heap Roots: 0.0ms +[2023-11-03T15:24:35.141-0400][gc,phases ] GC(0) Evacuate Collection Set: 7.6ms +[2023-11-03T15:24:35.141-0400][gc,phases ] GC(0) Post Evacuate Collection Set: 0.3ms +[2023-11-03T15:24:35.141-0400][gc,phases ] GC(0) Other: 1.0ms +[2023-11-03T15:24:35.141-0400][gc,heap ] GC(0) Eden regions: 25->0(21) +[2023-11-03T15:24:35.141-0400][gc,heap ] GC(0) Survivor regions: 0->4(4) +[2023-11-03T15:24:35.141-0400][gc,heap ] GC(0) 
Old regions: 0->4 +[2023-11-03T15:24:35.141-0400][gc,heap ] GC(0) Archive regions: 2->2 +[2023-11-03T15:24:35.141-0400][gc,heap ] GC(0) Humongous regions: 0->0 +[2023-11-03T15:24:35.141-0400][gc,metaspace] GC(0) Metaspace: 8123K(8320K)->8123K(8320K) NonClass: 7220K(7360K)->7220K(7360K) Class: 902K(960K)->902K(960K) +[2023-11-03T15:24:35.142-0400][gc ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 25M->8M(512M) 9.133ms +[2023-11-03T15:24:35.142-0400][gc,cpu ] GC(0) User=0.07s Sys=0.00s Real=0.01s diff --git a/logs/zookeeper-gc.log.0 b/logs/zookeeper-gc.log.0 new file mode 100644 index 0000000..b7ed0e6 --- /dev/null +++ b/logs/zookeeper-gc.log.0 @@ -0,0 +1,24 @@ +[2023-11-03T10:39:05.543-0400][gc] Using G1 +[2023-11-03T10:39:05.547-0400][gc,init] Version: 17.0.6+10 (release) +[2023-11-03T10:39:05.547-0400][gc,init] CPUs: 12 total, 12 available +[2023-11-03T10:39:05.547-0400][gc,init] Memory: 63941M +[2023-11-03T10:39:05.547-0400][gc,init] Large Page Support: Disabled +[2023-11-03T10:39:05.547-0400][gc,init] NUMA Support: Disabled +[2023-11-03T10:39:05.547-0400][gc,init] Compressed Oops: Enabled (32-bit) +[2023-11-03T10:39:05.547-0400][gc,init] Heap Region Size: 1M +[2023-11-03T10:39:05.547-0400][gc,init] Heap Min Capacity: 512M +[2023-11-03T10:39:05.547-0400][gc,init] Heap Initial Capacity: 512M +[2023-11-03T10:39:05.547-0400][gc,init] Heap Max Capacity: 512M +[2023-11-03T10:39:05.547-0400][gc,init] Pre-touch: Disabled +[2023-11-03T10:39:05.547-0400][gc,init] Parallel Workers: 10 +[2023-11-03T10:39:05.547-0400][gc,init] Concurrent Workers: 3 +[2023-11-03T10:39:05.547-0400][gc,init] Concurrent Refinement Workers: 10 +[2023-11-03T10:39:05.547-0400][gc,init] Periodic GC: Disabled +[2023-11-03T10:39:05.547-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0. 
+[2023-11-03T10:39:05.547-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824 +[2023-11-03T10:39:05.547-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000 +[2023-11-03T10:39:20.592-0400][gc,heap,exit] Heap +[2023-11-03T10:39:20.592-0400][gc,heap,exit] garbage-first heap total 524288K, used 25058K [0x00000000e0000000, 0x0000000100000000) +[2023-11-03T10:39:20.592-0400][gc,heap,exit] region size 1024K, 24 young (24576K), 0 survivors (0K) +[2023-11-03T10:39:20.592-0400][gc,heap,exit] Metaspace used 7726K, committed 7936K, reserved 1056768K +[2023-11-03T10:39:20.592-0400][gc,heap,exit] class space used 868K, committed 960K, reserved 1048576K diff --git a/logs/zookeeper-gc.log.1 b/logs/zookeeper-gc.log.1 new file mode 100644 index 0000000..3a787ab --- /dev/null +++ b/logs/zookeeper-gc.log.1 @@ -0,0 +1,24 @@ +[2023-11-03T11:51:41.717-0400][gc] Using G1 +[2023-11-03T11:51:41.721-0400][gc,init] Version: 17.0.6+10 (release) +[2023-11-03T11:51:41.721-0400][gc,init] CPUs: 12 total, 12 available +[2023-11-03T11:51:41.721-0400][gc,init] Memory: 63941M +[2023-11-03T11:51:41.721-0400][gc,init] Large Page Support: Disabled +[2023-11-03T11:51:41.721-0400][gc,init] NUMA Support: Disabled +[2023-11-03T11:51:41.721-0400][gc,init] Compressed Oops: Enabled (32-bit) +[2023-11-03T11:51:41.721-0400][gc,init] Heap Region Size: 1M +[2023-11-03T11:51:41.721-0400][gc,init] Heap Min Capacity: 512M +[2023-11-03T11:51:41.721-0400][gc,init] Heap Initial Capacity: 512M +[2023-11-03T11:51:41.721-0400][gc,init] Heap Max Capacity: 512M +[2023-11-03T11:51:41.721-0400][gc,init] Pre-touch: Disabled +[2023-11-03T11:51:41.721-0400][gc,init] Parallel Workers: 10 +[2023-11-03T11:51:41.721-0400][gc,init] Concurrent Workers: 3 +[2023-11-03T11:51:41.721-0400][gc,init] Concurrent Refinement Workers: 10 +[2023-11-03T11:51:41.721-0400][gc,init] Periodic GC: Disabled 
+[2023-11-03T11:51:41.721-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0. +[2023-11-03T11:51:41.721-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824 +[2023-11-03T11:51:41.721-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000 +[2023-11-03T11:51:53.681-0400][gc,heap,exit] Heap +[2023-11-03T11:51:53.681-0400][gc,heap,exit] garbage-first heap total 524288K, used 24546K [0x00000000e0000000, 0x0000000100000000) +[2023-11-03T11:51:53.681-0400][gc,heap,exit] region size 1024K, 24 young (24576K), 0 survivors (0K) +[2023-11-03T11:51:53.681-0400][gc,heap,exit] Metaspace used 7729K, committed 7872K, reserved 1056768K +[2023-11-03T11:51:53.681-0400][gc,heap,exit] class space used 870K, committed 960K, reserved 1048576K diff --git a/logs/zookeeper-gc.log.2 b/logs/zookeeper-gc.log.2 new file mode 100644 index 0000000..1125956 --- /dev/null +++ b/logs/zookeeper-gc.log.2 @@ -0,0 +1,24 @@ +[2023-11-03T13:56:32.082-0400][gc] Using G1 +[2023-11-03T13:56:32.085-0400][gc,init] Version: 17.0.6+10 (release) +[2023-11-03T13:56:32.085-0400][gc,init] CPUs: 12 total, 12 available +[2023-11-03T13:56:32.085-0400][gc,init] Memory: 63941M +[2023-11-03T13:56:32.085-0400][gc,init] Large Page Support: Disabled +[2023-11-03T13:56:32.085-0400][gc,init] NUMA Support: Disabled +[2023-11-03T13:56:32.085-0400][gc,init] Compressed Oops: Enabled (32-bit) +[2023-11-03T13:56:32.085-0400][gc,init] Heap Region Size: 1M +[2023-11-03T13:56:32.085-0400][gc,init] Heap Min Capacity: 512M +[2023-11-03T13:56:32.085-0400][gc,init] Heap Initial Capacity: 512M +[2023-11-03T13:56:32.085-0400][gc,init] Heap Max Capacity: 512M +[2023-11-03T13:56:32.085-0400][gc,init] Pre-touch: Disabled +[2023-11-03T13:56:32.085-0400][gc,init] Parallel Workers: 10 
+[2023-11-03T13:56:32.085-0400][gc,init] Concurrent Workers: 3
+[2023-11-03T13:56:32.085-0400][gc,init] Concurrent Refinement Workers: 10
+[2023-11-03T13:56:32.085-0400][gc,init] Periodic GC: Disabled
+[2023-11-03T13:56:32.085-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0.
+[2023-11-03T13:56:32.085-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824
+[2023-11-03T13:56:32.085-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000
+[2023-11-03T13:56:34.268-0400][gc,heap,exit] Heap
+[2023-11-03T13:56:34.268-0400][gc,heap,exit] garbage-first heap total 524288K, used 24544K [0x00000000e0000000, 0x0000000100000000)
+[2023-11-03T13:56:34.268-0400][gc,heap,exit] region size 1024K, 23 young (23552K), 0 survivors (0K)
+[2023-11-03T13:56:34.268-0400][gc,heap,exit] Metaspace used 7724K, committed 7936K, reserved 1056768K
+[2023-11-03T13:56:34.268-0400][gc,heap,exit] class space used 868K, committed 960K, reserved 1048576K
diff --git a/logs/zookeeper-gc.log.3 b/logs/zookeeper-gc.log.3
new file mode 100644
index 0000000..d642e78
--- /dev/null
+++ b/logs/zookeeper-gc.log.3
@@ -0,0 +1,39 @@
+[2023-11-03T13:58:59.707-0400][gc] Using G1
+[2023-11-03T13:58:59.710-0400][gc,init] Version: 17.0.6+10 (release)
+[2023-11-03T13:58:59.710-0400][gc,init] CPUs: 12 total, 12 available
+[2023-11-03T13:58:59.710-0400][gc,init] Memory: 63941M
+[2023-11-03T13:58:59.710-0400][gc,init] Large Page Support: Disabled
+[2023-11-03T13:58:59.710-0400][gc,init] NUMA Support: Disabled
+[2023-11-03T13:58:59.710-0400][gc,init] Compressed Oops: Enabled (32-bit)
+[2023-11-03T13:58:59.710-0400][gc,init] Heap Region Size: 1M
+[2023-11-03T13:58:59.710-0400][gc,init] Heap Min Capacity: 512M
+[2023-11-03T13:58:59.710-0400][gc,init] Heap Initial Capacity: 512M
+[2023-11-03T13:58:59.710-0400][gc,init] Heap Max Capacity: 512M
+[2023-11-03T13:58:59.710-0400][gc,init] Pre-touch: Disabled
+[2023-11-03T13:58:59.710-0400][gc,init] Parallel Workers: 10
+[2023-11-03T13:58:59.710-0400][gc,init] Concurrent Workers: 3
+[2023-11-03T13:58:59.710-0400][gc,init] Concurrent Refinement Workers: 10
+[2023-11-03T13:58:59.710-0400][gc,init] Periodic GC: Disabled
+[2023-11-03T13:58:59.710-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0.
+[2023-11-03T13:58:59.710-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824
+[2023-11-03T13:58:59.710-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000
+[2023-11-03T14:01:22.780-0400][gc,start ] GC(0) Pause Young (Normal) (G1 Evacuation Pause)
+[2023-11-03T14:01:22.781-0400][gc,task ] GC(0) Using 10 workers of 10 for evacuation
+[2023-11-03T14:01:22.789-0400][gc,phases ] GC(0) Pre Evacuate Collection Set: 0.1ms
+[2023-11-03T14:01:22.789-0400][gc,phases ] GC(0) Merge Heap Roots: 0.0ms
+[2023-11-03T14:01:22.789-0400][gc,phases ] GC(0) Evacuate Collection Set: 7.0ms
+[2023-11-03T14:01:22.789-0400][gc,phases ] GC(0) Post Evacuate Collection Set: 0.4ms
+[2023-11-03T14:01:22.789-0400][gc,phases ] GC(0) Other: 1.0ms
+[2023-11-03T14:01:22.789-0400][gc,heap ] GC(0) Eden regions: 25->0(21)
+[2023-11-03T14:01:22.789-0400][gc,heap ] GC(0) Survivor regions: 0->4(4)
+[2023-11-03T14:01:22.789-0400][gc,heap ] GC(0) Old regions: 0->4
+[2023-11-03T14:01:22.789-0400][gc,heap ] GC(0) Archive regions: 2->2
+[2023-11-03T14:01:22.789-0400][gc,heap ] GC(0) Humongous regions: 0->0
+[2023-11-03T14:01:22.789-0400][gc,metaspace] GC(0) Metaspace: 8091K(8320K)->8091K(8320K) NonClass: 7177K(7296K)->7177K(7296K) Class: 914K(1024K)->914K(1024K)
+[2023-11-03T14:01:22.789-0400][gc ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 25M->8M(512M) 8.555ms
+[2023-11-03T14:01:22.789-0400][gc,cpu ] GC(0) User=0.05s Sys=0.01s Real=0.01s
+[2023-11-03T14:29:30.152-0400][gc,heap,exit] Heap
+[2023-11-03T14:29:30.152-0400][gc,heap,exit] garbage-first heap total 524288K, used 18100K [0x00000000e0000000, 0x0000000100000000)
+[2023-11-03T14:29:30.152-0400][gc,heap,exit] region size 1024K, 13 young (13312K), 4 survivors (4096K)
+[2023-11-03T14:29:30.152-0400][gc,heap,exit] Metaspace used 8810K, committed 9024K, reserved 1064960K
+[2023-11-03T14:29:30.152-0400][gc,heap,exit] class space used 938K, committed 1024K, reserved 1048576K
diff --git a/logs/zookeeper-gc.log.4 b/logs/zookeeper-gc.log.4
new file mode 100644
index 0000000..416458c
--- /dev/null
+++ b/logs/zookeeper-gc.log.4
@@ -0,0 +1,24 @@
+[2023-11-03T15:23:24.500-0400][gc] Using G1
+[2023-11-03T15:23:24.515-0400][gc,init] Version: 17.0.6+10 (release)
+[2023-11-03T15:23:24.515-0400][gc,init] CPUs: 12 total, 12 available
+[2023-11-03T15:23:24.515-0400][gc,init] Memory: 63941M
+[2023-11-03T15:23:24.515-0400][gc,init] Large Page Support: Disabled
+[2023-11-03T15:23:24.515-0400][gc,init] NUMA Support: Disabled
+[2023-11-03T15:23:24.515-0400][gc,init] Compressed Oops: Enabled (32-bit)
+[2023-11-03T15:23:24.515-0400][gc,init] Heap Region Size: 1M
+[2023-11-03T15:23:24.515-0400][gc,init] Heap Min Capacity: 512M
+[2023-11-03T15:23:24.515-0400][gc,init] Heap Initial Capacity: 512M
+[2023-11-03T15:23:24.515-0400][gc,init] Heap Max Capacity: 512M
+[2023-11-03T15:23:24.515-0400][gc,init] Pre-touch: Disabled
+[2023-11-03T15:23:24.515-0400][gc,init] Parallel Workers: 10
+[2023-11-03T15:23:24.515-0400][gc,init] Concurrent Workers: 3
+[2023-11-03T15:23:24.515-0400][gc,init] Concurrent Refinement Workers: 10
+[2023-11-03T15:23:24.515-0400][gc,init] Periodic GC: Disabled
+[2023-11-03T15:23:24.515-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0.
+[2023-11-03T15:23:24.515-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824
+[2023-11-03T15:23:24.515-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000
+[2023-11-03T15:23:24.858-0400][gc,heap,exit] Heap
+[2023-11-03T15:23:24.858-0400][gc,heap,exit] garbage-first heap total 524288K, used 15328K [0x00000000e0000000, 0x0000000100000000)
+[2023-11-03T15:23:24.858-0400][gc,heap,exit] region size 1024K, 14 young (14336K), 0 survivors (0K)
+[2023-11-03T15:23:24.858-0400][gc,heap,exit] Metaspace used 5972K, committed 6144K, reserved 1056768K
+[2023-11-03T15:23:24.858-0400][gc,heap,exit] class space used 668K, committed 768K, reserved 1048576K
diff --git a/logs/zookeeper-gc.log.5 b/logs/zookeeper-gc.log.5
new file mode 100644
index 0000000..d6a4a52
--- /dev/null
+++ b/logs/zookeeper-gc.log.5
@@ -0,0 +1,24 @@
+[2023-11-03T15:23:59.140-0400][gc] Using G1
+[2023-11-03T15:23:59.157-0400][gc,init] Version: 17.0.6+10 (release)
+[2023-11-03T15:23:59.157-0400][gc,init] CPUs: 12 total, 12 available
+[2023-11-03T15:23:59.158-0400][gc,init] Memory: 63941M
+[2023-11-03T15:23:59.158-0400][gc,init] Large Page Support: Disabled
+[2023-11-03T15:23:59.158-0400][gc,init] NUMA Support: Disabled
+[2023-11-03T15:23:59.158-0400][gc,init] Compressed Oops: Enabled (32-bit)
+[2023-11-03T15:23:59.158-0400][gc,init] Heap Region Size: 1M
+[2023-11-03T15:23:59.158-0400][gc,init] Heap Min Capacity: 512M
+[2023-11-03T15:23:59.158-0400][gc,init] Heap Initial Capacity: 512M
+[2023-11-03T15:23:59.158-0400][gc,init] Heap Max Capacity: 512M
+[2023-11-03T15:23:59.158-0400][gc,init] Pre-touch: Disabled
+[2023-11-03T15:23:59.158-0400][gc,init] Parallel Workers: 10
+[2023-11-03T15:23:59.158-0400][gc,init] Concurrent Workers: 3
+[2023-11-03T15:23:59.158-0400][gc,init] Concurrent Refinement Workers: 10
+[2023-11-03T15:23:59.158-0400][gc,init] Periodic GC: Disabled
+[2023-11-03T15:23:59.158-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0.
+[2023-11-03T15:23:59.158-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824
+[2023-11-03T15:23:59.158-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000
+[2023-11-03T15:23:59.487-0400][gc,heap,exit] Heap
+[2023-11-03T15:23:59.487-0400][gc,heap,exit] garbage-first heap total 524288K, used 15328K [0x00000000e0000000, 0x0000000100000000)
+[2023-11-03T15:23:59.487-0400][gc,heap,exit] region size 1024K, 14 young (14336K), 0 survivors (0K)
+[2023-11-03T15:23:59.487-0400][gc,heap,exit] Metaspace used 5969K, committed 6144K, reserved 1056768K
+[2023-11-03T15:23:59.487-0400][gc,heap,exit] class space used 667K, committed 768K, reserved 1048576K
diff --git a/logs/zookeeper-gc.log.6 b/logs/zookeeper-gc.log.6
new file mode 100644
index 0000000..297c165
--- /dev/null
+++ b/logs/zookeeper-gc.log.6
@@ -0,0 +1,24 @@
+[2023-11-03T15:24:14.100-0400][gc] Using G1
+[2023-11-03T15:24:14.115-0400][gc,init] Version: 17.0.6+10 (release)
+[2023-11-03T15:24:14.115-0400][gc,init] CPUs: 12 total, 12 available
+[2023-11-03T15:24:14.115-0400][gc,init] Memory: 63941M
+[2023-11-03T15:24:14.115-0400][gc,init] Large Page Support: Disabled
+[2023-11-03T15:24:14.115-0400][gc,init] NUMA Support: Disabled
+[2023-11-03T15:24:14.115-0400][gc,init] Compressed Oops: Enabled (32-bit)
+[2023-11-03T15:24:14.115-0400][gc,init] Heap Region Size: 1M
+[2023-11-03T15:24:14.115-0400][gc,init] Heap Min Capacity: 512M
+[2023-11-03T15:24:14.115-0400][gc,init] Heap Initial Capacity: 512M
+[2023-11-03T15:24:14.115-0400][gc,init] Heap Max Capacity: 512M
+[2023-11-03T15:24:14.115-0400][gc,init] Pre-touch: Disabled
+[2023-11-03T15:24:14.115-0400][gc,init] Parallel Workers: 10
+[2023-11-03T15:24:14.115-0400][gc,init] Concurrent Workers: 3
+[2023-11-03T15:24:14.115-0400][gc,init] Concurrent Refinement Workers: 10
+[2023-11-03T15:24:14.115-0400][gc,init] Periodic GC: Disabled
+[2023-11-03T15:24:14.116-0400][gc,metaspace] CDS archive(s) mapped at: [0x0000000800000000-0x0000000800bd5000-0x0000000800bd5000), size 12406784, SharedBaseAddress: 0x0000000800000000, ArchiveRelocationMode: 0.
+[2023-11-03T15:24:14.116-0400][gc,metaspace] Compressed class space mapped at: 0x0000000800c00000-0x0000000840c00000, reserved size: 1073741824
+[2023-11-03T15:24:14.116-0400][gc,metaspace] Narrow klass base: 0x0000000800000000, Narrow klass shift: 0, Narrow klass range: 0x100000000
+[2023-11-03T15:24:14.444-0400][gc,heap,exit] Heap
+[2023-11-03T15:24:14.444-0400][gc,heap,exit] garbage-first heap total 524288K, used 15328K [0x00000000e0000000, 0x0000000100000000)
+[2023-11-03T15:24:14.444-0400][gc,heap,exit] region size 1024K, 14 young (14336K), 0 survivors (0K)
+[2023-11-03T15:24:14.444-0400][gc,heap,exit] Metaspace used 5970K, committed 6144K, reserved 1056768K
+[2023-11-03T15:24:14.444-0400][gc,heap,exit] class space used 667K, committed 768K, reserved 1048576K
diff --git a/site-docs/kafka_2.13-3.6.0-site-docs.tgz b/site-docs/kafka_2.13-3.6.0-site-docs.tgz
new file mode 100644
index 0000000..b7c0195
Binary files /dev/null and b/site-docs/kafka_2.13-3.6.0-site-docs.tgz differ