CVS commit: pkgsrc/devel/kafka
Module Name: pkgsrc
Committed By: fhajny
Date: Thu Apr 5 08:46:37 UTC 2018
Modified Files:
pkgsrc/devel/kafka: Makefile PLIST distinfo
pkgsrc/devel/kafka/patches: patch-bin_connect-distributed.sh
patch-bin_connect-standalone.sh patch-bin_kafka-run-class.sh
patch-bin_kafka-server-stop.sh patch-config_server.properties
Log Message:
devel/kafka: Update to 1.1.0.
New Feature
- automatic migration of log dirs to new locations
- KIP-145 - Expose Record Headers in Kafka Connect
- Add the AdminClient in Streams' KafkaClientSupplier
- Support dynamic updates of frequently updated broker configs (see the
sketch just below)
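
As a rough sketch of the dynamic broker config support (KIP-226) noted
above, a broker setting can now be changed at runtime through the
AdminClient-backed ConfigCommand. The broker id, host and config key below
are placeholders, not taken from this commit, and the exact flags should be
checked against "kafka-configs.sh --help":

# Override a per-broker setting without a restart (example values only):
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
    --entity-type brokers --entity-name 0 \
    --alter --add-config log.cleaner.threads=2

# Show which settings are currently overridden dynamically:
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
    --entity-type brokers --entity-name 0 --describe
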
Improvement
- KafkaConnect should support regular expression for topics
- Move kafka-streams test fixtures into a published package
- SSL support for Connect REST API
- Grow default heap settings for distributed Connect from 256M to 1G
- Enable access to key in ValueTransformer
- Add "getAllKeys" API for querying windowed KTable stores
- Unify StreamsKafkaClient instances
- Revisit Streams DSL JavaDocs
- Extend Consumer Group Reset Offset tool for Stream Applications (see the
sketch after this list)
- KIP-175: ConsumerGroupCommand no longer shows output for consumer
groups which have not committed offsets
- Add a broker metric specifying the number of consumer group
rebalances in progress
- Use Jackson for serialising to JSON
- KafkaShortnamer should allow for case-insensitive matches
- Improve Util classes
- Gradle 3.0+ is needed for the build
- Add records deletion operation to the new Admin Client API
- Kafka metrics templates used in document generation should maintain
order of tags
- Provide for custom error handling when Kafka Streams fails to
produce
- Make Repartition Topics Transient
- Connect Schema comparison is slow for large schemas
- Add a Validator for NonNull configurations and remove redundant null
checks on lists
- Have State Stores Restore Before Initializing Topology
- Optimize condition in if statement to reduce the number of
comparisons
- Remove unnecessary null check
- Introduce Incremental FetchRequests to Increase Partition
Scalability
- SSLTransportLayer should keep reading from socket until either the
buffer is full or the socket has no more data
- Improve KTable Source state store auto-generated names
- Extend consumer offset reset tool to support deletion (KIP-229)
- Expose Kafka cluster ID in Connect REST API
- Maven artifact for kafka should not depend on log4j
- ConsumerGroupCommand should use the new consumer to query the log
end offsets.
- Change LogSegment.delete to deleteIfExists and harden log recovery
- Make ProducerConfig and ConsumerConfig constructors public
- Improve synchronization in CachingKeyValueStore methods
- Improve Kafka GZip compression performance
- Improve JavaDoc of SourceTask#poll() to discourage indefinite
blocking
- Avoid creating dummy checkpoint files with no state stores
- Change log level from ERROR to WARN for the "not leader for this
partition" exception
- Delay initiating the txn on producers until initializeTopology with
EOS turned on
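
As a rough sketch of the extended offset reset tooling mentioned in the
list above, offsets for a (Streams) application's consumer group can be
rewound with kafka-consumer-groups.sh. Group and host names are
placeholders, and the flags should be verified against the tool's --help
output:

# Preview the new offsets; without --execute the tool only prints the plan:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --group my-streams-app --reset-offsets --all-topics --to-earliest

# Apply the reset once the plan looks right:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --group my-streams-app --reset-offsets --all-topics --to-earliest --execute
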
Bug
- change log4j to slf4j
- Use RollingFileAppender by default in log4j.properties
- Cached zkVersion not equal to that in zookeeper, broker not
recovering.
- FileRecords.read doesn't handle size > sizeInBytes when start is not
zero
- a soft failure in controller may leave a topic partition in an
inconsistent state
- Cannot truncate to a negative offset (-1) exception at broker
startup
- automated leader rebalance causes replication downtime for clusters
with too many partitions
- kafka-run-class has potential to add a leading colon to classpath
- QueryableStateIntegrationTest.concurrentAccess is failing
occasionally in jenkins builds
- FileStreamSource Connector not working for large files (~ 1GB)
- KeyValueIterator returns null values
- KafkaProducer is not joining its IO thread properly
- Kafka connect: error with special characters in connector name
- Replace StreamsKafkaClient with AdminClient in Kafka Streams
- LogCleaner#cleanSegments should not ignore failures to delete files
- Connect Rest API allows creating connectors with an empty name -
KIP-212
- KerberosLogin#login should probably be synchronized
- Support replicas movement between log directories (KIP-113)
- Consumer ListOffsets request can starve group heartbeats
- Struct.put() should include the field name if validation fails
- Clarify handling of connector name in config
- Allow user to specify relative path as log directory
- KafkaConsumer should validate topics/TopicPartitions on
subscribe/assign
- Controller should only update reassignment znode if there is change
in the reassignment data
- records.lag should use tags for topic and partition rather than
using metric name.
- KafkaProducer should not wrap InterruptedException in close() with
KafkaException
- Connect classloader isolation may be broken for JDBC drivers
- JsonConverter generates "Mismatching schema" DataException
- NoSuchElementException in markErrorMeter during
TransactionsBounceTest
- Make KafkaFuture.Function java 8 lambda compatible
- ThreadCache#sizeBytes() should check overflow
- KafkaFuture timeout fails to fire if a narrow race condition is hit
- DeleteRecordsRequest to a non-leader
- ReplicaFetcherThread should close the ReplicaFetcherBlockingSend
earlier on shutdown
- NoSuchMethodError when creating ProducerRecord in upgrade system
tests
- Running tools on Windows fails due to a typo in the JVM config
- Streams metrics tagged incorrectly
- ducker-ak: add ipaddress and enum34 dependencies to docker image
- Kafka cannot recover after an unclean shutdown on Windows
- Scanning plugin.path needs to support relative symlinks
- Reconnecting to broker does not exponentially backoff
- TaskManager should be type aware
- Major performance issue due to excessive logging during leader
election
- RecordCollectorImpl should not retry sending
- Restore and global consumer should not use auto.offset.reset
- Global Consumer should handle TimeoutException
- Reduce rebalance time by not checking if created topics are
available
- VerifiableConsumer with --max-messages doesn't exit
- Transaction markers are sometimes discarded if txns complete
concurrently
- Simplify StreamsBuilder#addGlobalStore
- JmxReporter can't handle windows style directory paths
- CONSUMER-ID and HOST values are concatenated if the CONSUMER-ID is >
50 chars
- ClientQuotaManager threads prevent shutdown when encountering an
error loading logs
- Streams configuration requires consumer. and producer. in order to
be read
- Timestamp on streams directory contains a colon, which is an illegal
character
- Add methods in Options classes to keep binary compatibility with
0.11
- RecordQueue.clear() does not clear MinTimestampTracker's maintained
list
- Selector memory leak with high likelihood of OOM in case of down
conversion
- GlobalKTable never finishes restoring when consuming transactional
messages
- Server crash while deleting segments
- IllegalArgumentException if 1.0.0 is used for
inter.broker.protocol.version or log.message.format.version
- Using standby replicas with an in memory state store causes Streams
to crash
- Issues with protocol version when applying a rolling upgrade to
1.0.0
- Fix system test dependency issues
- Kafka Connect requires permission to create internal topics even if
they exist
- A metric named 'XX' already exists, can't register another one.
- Improve sink connector topic regex validation
- Flaky Unit test:
KStreamKTableJoinIntegrationTest.shouldCountClicksPerRegionWithNonZeroByteCache
- Make KafkaStreams.cleanup() clean global state directory
- AbstractCoordinator does not clearly handle null exceptions
- Request logging throws exception if acks=0
- GlobalKTable missing #queryableStoreName()
- KTable state restore fails after rebalance
- Make loadClass thread-safe for class loaders of Connect plugins
- System Test failed: ConnectRestApiTest
- Broken symlink interrupts scanning the plugin path
- NetworkClient should not return internal failed api version
responses from poll
- Transient failure in
NetworkClientTest.testConnectionDelayDisconnected
- Line numbers on log messages are incorrect
- Topic can not be recreated after it is deleted
- mBeanName should be removed before returning from
JmxReporter#removeAttribute()
- Kafka Core should have explicit SLF4J API dependency
- StreamsResetter should return non-zero return code on error
- kafka-acls regression for comma characters (and maybe other
characters as well)
- Error deleting log for topic, all log dirs failed.
- punctuate with WALL_CLOCK_TIME triggered immediately
- Exclude node groups belonging to global stores in
InternalTopologyBuilder#makeNodeGroups
- Transient failure in
kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint and
kafka.api.AdminClientIntegrationTest.testAlterReplicaLogDirs
- NetworkClient.inFlightRequestCount() is not thread safe, causing
ConcurrentModificationExceptions when sensors are read
- ConcurrentModificationException during streams state restoration
- Update KStream JavaDoc with regard to KIP-182
- RocksDB segments not removed when store is closed causes
re-initialization to fail
- auto commit does not work since coordinatorUnknown() is always true
- Fix StateRestoreListener To Use Correct Batch Ending Offset
- NullPointerException on KStream-GlobalKTable leftJoin when
KeyValueMapper returns null
- StreamThread.shutdown doesn't clean up completely when called before
StreamThread.start
- Update ZooKeeper to 3.4.11, Gradle and other minor updates
- output from ensure copartitioning is not used for Cluster metadata,
resulting in partitions without tasks working on them
- Consumer should not block setting initial positions of unavailable
partitions
- Non-aggregation KTable generation operator does not construct value
getter correctly
- AdminClient should handle empty or null topic names better
- When trace level logging is enabled in mirror maker, it throws a null
pointer exception and the mirror maker shuts down
- Simplify KStreamReduce
- Base64URL encoding under JRE 1.7 is broken due to incorrect padding
assumption
- ChangeLoggingKeyValueBytesStore.all() returns null
- Fetcher.retrieveOffsetsByTimes() should add all the topics to the
metadata refresh topics set.
- LogSegment.truncateTo() should always resize the index file
- Connect: Plugin scan is very slow
- Connect: Some per-task-metrics not working
- Connect header parser incorrectly parses arrays
- Java Producer: Excessive memory usage with compression enabled
- New Connect header support doesn't define 'converter.type' property
correctly
- ZooKeeperClient holds a lock while waiting for responses, blocking
shutdown
- Transient failure in
DynamicBrokerReconfigurationTest.testThreadPoolResize
- Broker leaks memory and file descriptors after sudden client
disconnects
- Delegation token internals should not impact public interfaces
- Streams quickstart pom.xml is missing versions for a bunch of
plugins
- Deadlock while processing Controller Events
- Broker doesn't reject Produce request with inconsistent state
- LogCleanerManager.doneDeleting() should check the partition state
before deleting the in progress partition
- KafkaController.brokerInfo not updated on dynamic update
- Connect standalone SASL file source and sink test fails without
explanation
- Connect distributed and standalone worker 'main()' methods should
catch and log all exceptions
- Consumer bytes-fetched and records-fetched metrics are not
aggregated correctly
- Coordinator disconnect in heartbeat thread can cause commitSync to
block indefinitely
- Regression in consumer auto-commit backoff behavior
- GroupMetadataManager.loadGroupsAndOffsets decompresses record batch
needlessly
- log segment deletion could cause a disk to be marked offline
incorrectly
- Delayed operations may not be completed when there is lock
contention
- Expression for GlobalKTable is not correct
- System tests do not handle ZK chroot properly with SCRAM
- Fix config initialization in DynamicBrokerConfig
- ReplicaFetcher crashes with "Attempted to complete a transaction
which was not started"
Test
- Add concurrent tests to exercise all paths in group/transaction
managers
- Add unit tests for ClusterConnectionStates
- Only delete reassign_partitions znode after reassignment is complete
- KafkaStreamsTest fails in trunk
- SelectorTest may fail with ConcurrentModificationException
Sub-task
- Add capability to create delegation token (see the sketch after this list)
- Add authentication based on delegation token.
- Add capability to renew/expire delegation tokens.
- always leave the last surviving member of the ISR in ZK
- handle ZK session expiration properly when a new session can't be
established
- Streams should not re-throw if suspending/closing tasks fails
- Use async ZookeeperClient in Controller
- Use async ZookeeperClient in SimpleAclAuthorizer
- Use async ZookeeperClient for DynamicConfigManager
- Use async ZookeeperClient for Admin operations
- Trogdor should handle injecting disk faults
- Add process stop faults, round trip workload, partitioned
produce-consume test
- add the notion of max inflight requests to async ZookeeperClient
- Add workload generation capabilities to Trogdor
- Add ZooKeeperRequestLatencyMs to KafkaZkClient
- Use ZookeeperClient in LogManager
- Use ZookeeperClient in GroupCoordinator and TransactionCoordinator
- Use ZookeeperClient in KafkaApis
- Use ZookeeperClient in ReplicaManager and Partition
- Tests for KafkaZkClient
- Transient failure in
kafka.api.SaslScramSslEndToEndAuthorizationTest.testTwoConsumersWithDifferentSaslCredentials
- minimize the number of triggers enqueuing
PreferredReplicaLeaderElection events
- Enable dynamic reconfiguration of SSL keystores
- Enable resizing various broker thread pools
- Enable reconfiguration of metrics reporters and their custom configs
- Enable dynamic reconfiguration of log cleaners
- Enable reconfiguration of default topic configs used by brokers
- Enable reconfiguration of listeners and security configs
- Add ProduceBench to Trogdor
- move ZK metrics in KafkaHealthCheck to ZookeeperClient
- Add documentation for delegation token authentication mechanism
- Document dynamic config update
- Extend ConfigCommand to update broker config using new AdminClient
- Add test to verify markPartitionsForTruncation after fetcher thread
pool resize
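
The delegation token sub-tasks above come with a new
bin/kafka-delegation-tokens.sh tool (it also shows up in the PLIST diff
below). A rough sketch of its use; the host and the client.properties file
are placeholders, and the flag names are recalled from the upstream
security docs rather than taken from this commit, so they should be
verified against the tool's --help output:

# Create a token for the authenticated principal (-1 = broker default lifetime):
bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 \
    --create --max-life-time-period -1 --command-config client.properties

# List existing tokens:
bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 \
    --describe --command-config client.properties
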
To generate a diff of this commit:
cvs rdiff -u -r1.5 -r1.6 pkgsrc/devel/kafka/Makefile \
pkgsrc/devel/kafka/distinfo
cvs rdiff -u -r1.4 -r1.5 pkgsrc/devel/kafka/PLIST
cvs rdiff -u -r1.1 -r1.2 \
pkgsrc/devel/kafka/patches/patch-bin_connect-distributed.sh \
pkgsrc/devel/kafka/patches/patch-bin_connect-standalone.sh \
pkgsrc/devel/kafka/patches/patch-bin_kafka-server-stop.sh \
pkgsrc/devel/kafka/patches/patch-config_server.properties
cvs rdiff -u -r1.2 -r1.3 \
pkgsrc/devel/kafka/patches/patch-bin_kafka-run-class.sh
Please note that diffs are not public domain; they are subject to the
copyright notices on the relevant files.
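
For reference, a user tracking pkgsrc via CVS could pick up this update
roughly as follows; the checkout location is an assumption, and
"make replace" / "make update" behave as described in the pkgsrc guide:

# Assuming a pkgsrc checkout under /usr/pkgsrc:
cd /usr/pkgsrc/devel/kafka
cvs update -dP
# Rebuild and swap the installed kafka package in place
# ("make update" would instead also rebuild dependent packages):
make replace
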
Modified files:
Index: pkgsrc/devel/kafka/Makefile
diff -u pkgsrc/devel/kafka/Makefile:1.5 pkgsrc/devel/kafka/Makefile:1.6
--- pkgsrc/devel/kafka/Makefile:1.5 Wed Mar 7 11:50:57 2018
+++ pkgsrc/devel/kafka/Makefile Thu Apr 5 08:46:37 2018
@@ -1,6 +1,6 @@
-# $NetBSD: Makefile,v 1.5 2018/03/07 11:50:57 fhajny Exp $
+# $NetBSD: Makefile,v 1.6 2018/04/05 08:46:37 fhajny Exp $
-DISTNAME= kafka_${SCALA_VERSION}-1.0.1
+DISTNAME= kafka_${SCALA_VERSION}-1.1.0
PKGNAME= ${DISTNAME:S/_${SCALA_VERSION}//}
CATEGORIES= net
MASTER_SITES= ${MASTER_SITE_APACHE:=kafka/${PKGVERSION_NOREV}/}
Index: pkgsrc/devel/kafka/distinfo
diff -u pkgsrc/devel/kafka/distinfo:1.5 pkgsrc/devel/kafka/distinfo:1.6
--- pkgsrc/devel/kafka/distinfo:1.5 Wed Mar 7 11:50:57 2018
+++ pkgsrc/devel/kafka/distinfo Thu Apr 5 08:46:37 2018
@@ -1,12 +1,12 @@
-$NetBSD: distinfo,v 1.5 2018/03/07 11:50:57 fhajny Exp $
+$NetBSD: distinfo,v 1.6 2018/04/05 08:46:37 fhajny Exp $
-SHA1 (kafka_2.12-1.0.1.tgz) = f24f467be4baa07690214ad6995dc8e62e40b86c
-RMD160 (kafka_2.12-1.0.1.tgz) = cfbc0cfd056d9cd28be6c9c566905c1cef4282d4
-SHA512 (kafka_2.12-1.0.1.tgz) = 935c0df1cf742405c40d9248cfdd1578038b595b59ec5a350543a7fe67b6be26ff6c4426f7c0c072ff4aa006b701502a55fcf7e2ced1fdc64330e3383035078c
-Size (kafka_2.12-1.0.1.tgz) = 44474706 bytes
-SHA1 (patch-bin_connect-distributed.sh) = 49e679e9d9d355921054a3723c96e519af871173
-SHA1 (patch-bin_connect-standalone.sh) = 9c300a771dbf02f733edd2b8a8cdf9f62c9322b4
-SHA1 (patch-bin_kafka-run-class.sh) = afba1d71d7bc40762068e27bda3c77159f7eb37c
+SHA1 (kafka_2.12-1.1.0.tgz) = 057cc111d354d2c20f0125d8e43b44682c69b381
+RMD160 (kafka_2.12-1.1.0.tgz) = 2e92b496a2c4d317d740cfa9d287790ac6d9ef96
+SHA512 (kafka_2.12-1.1.0.tgz) = 48d1ddc71f5a5b1b25d111f792553be69be62293640a3c6af985203c6ee88c6aa78e01327066bfad3feae6b0b45d71c0cac6ebd2d08843d92269132741a3791b
+Size (kafka_2.12-1.1.0.tgz) = 50326212 bytes
+SHA1 (patch-bin_connect-distributed.sh) = 3d530501d50a850dbb52da8c08cb89c7c8407d50
+SHA1 (patch-bin_connect-standalone.sh) = 5a00efdf616f761565579741babce3efc6d3a026
+SHA1 (patch-bin_kafka-run-class.sh) = 3b8111d2c184327c1277a60f6e554527cb936bab
SHA1 (patch-bin_kafka-server-start.sh) = 2e93ef575af6738e4af5555b395a10092e6de933
-SHA1 (patch-bin_kafka-server-stop.sh) = 6121f1a519aff541d018f0e57946a872f4253fdf
-SHA1 (patch-config_server.properties) = 1bdb5bfded1325a3a5afe070dc3081f6b24185a8
+SHA1 (patch-bin_kafka-server-stop.sh) = 1c690bed0b91ec9de7c551936a04ad5b97d394b1
+SHA1 (patch-config_server.properties) = 23c9e1b58a3ddf0a3dc4d5d082d0f90f8a65ba0d
Index: pkgsrc/devel/kafka/PLIST
diff -u pkgsrc/devel/kafka/PLIST:1.4 pkgsrc/devel/kafka/PLIST:1.5
--- pkgsrc/devel/kafka/PLIST:1.4 Wed Mar 7 11:50:57 2018
+++ pkgsrc/devel/kafka/PLIST Thu Apr 5 08:46:37 2018
@@ -1,4 +1,4 @@
-@comment $NetBSD: PLIST,v 1.4 2018/03/07 11:50:57 fhajny Exp $
+@comment $NetBSD: PLIST,v 1.5 2018/04/05 08:46:37 fhajny Exp $
bin/connect-distributed.sh
bin/connect-standalone.sh
bin/kafka-acls.sh
@@ -8,6 +8,7 @@ bin/kafka-console-consumer.sh
bin/kafka-console-producer.sh
bin/kafka-consumer-groups.sh
bin/kafka-consumer-perf-test.sh
+bin/kafka-delegation-tokens.sh
bin/kafka-delete-records.sh
bin/kafka-log-dirs.sh
bin/kafka-mirror-maker.sh
@@ -36,12 +37,12 @@ lib/java/kafka/libs/guava-20.0.jar
lib/java/kafka/libs/hk2-api-2.5.0-b32.jar
lib/java/kafka/libs/hk2-locator-2.5.0-b32.jar
lib/java/kafka/libs/hk2-utils-2.5.0-b32.jar
-lib/java/kafka/libs/jackson-annotations-2.9.1.jar
-lib/java/kafka/libs/jackson-core-2.9.1.jar
-lib/java/kafka/libs/jackson-databind-2.9.1.jar
-lib/java/kafka/libs/jackson-jaxrs-base-2.9.1.jar
-lib/java/kafka/libs/jackson-jaxrs-json-provider-2.9.1.jar
-lib/java/kafka/libs/jackson-module-jaxb-annotations-2.9.1.jar
+lib/java/kafka/libs/jackson-annotations-2.9.4.jar
+lib/java/kafka/libs/jackson-core-2.9.4.jar
+lib/java/kafka/libs/jackson-databind-2.9.4.jar
+lib/java/kafka/libs/jackson-jaxrs-base-2.9.4.jar
+lib/java/kafka/libs/jackson-jaxrs-json-provider-2.9.4.jar
+lib/java/kafka/libs/jackson-module-jaxb-annotations-2.9.4.jar
lib/java/kafka/libs/javassist-3.20.0-GA.jar
lib/java/kafka/libs/javassist-3.21.0-GA.jar
lib/java/kafka/libs/javax.annotation-api-1.2.jar
@@ -56,19 +57,21 @@ lib/java/kafka/libs/jersey-container-ser
lib/java/kafka/libs/jersey-guava-2.25.1.jar
lib/java/kafka/libs/jersey-media-jaxb-2.25.1.jar
lib/java/kafka/libs/jersey-server-2.25.1.jar
-lib/java/kafka/libs/jetty-continuation-9.2.22.v20170606.jar
-lib/java/kafka/libs/jetty-http-9.2.22.v20170606.jar
-lib/java/kafka/libs/jetty-io-9.2.22.v20170606.jar
-lib/java/kafka/libs/jetty-security-9.2.22.v20170606.jar
-lib/java/kafka/libs/jetty-server-9.2.22.v20170606.jar
-lib/java/kafka/libs/jetty-servlet-9.2.22.v20170606.jar
-lib/java/kafka/libs/jetty-servlets-9.2.22.v20170606.jar
-lib/java/kafka/libs/jetty-util-9.2.22.v20170606.jar
+lib/java/kafka/libs/jetty-client-9.2.24.v20180105.jar
+lib/java/kafka/libs/jetty-continuation-9.2.24.v20180105.jar
+lib/java/kafka/libs/jetty-http-9.2.24.v20180105.jar
+lib/java/kafka/libs/jetty-io-9.2.24.v20180105.jar
+lib/java/kafka/libs/jetty-security-9.2.24.v20180105.jar
+lib/java/kafka/libs/jetty-server-9.2.24.v20180105.jar
+lib/java/kafka/libs/jetty-servlet-9.2.24.v20180105.jar
+lib/java/kafka/libs/jetty-servlets-9.2.24.v20180105.jar
+lib/java/kafka/libs/jetty-util-9.2.24.v20180105.jar
lib/java/kafka/libs/jopt-simple-5.0.4.jar
lib/java/kafka/libs/kafka-clients-${PKGVERSION}.jar
lib/java/kafka/libs/kafka-log4j-appender-${PKGVERSION}.jar
lib/java/kafka/libs/kafka-streams-${PKGVERSION}.jar
lib/java/kafka/libs/kafka-streams-examples-${PKGVERSION}.jar
+lib/java/kafka/libs/kafka-streams-test-utils-${PKGVERSION}.jar
lib/java/kafka/libs/kafka-tools-${PKGVERSION}.jar
lib/java/kafka/libs/kafka_2.12-${PKGVERSION}-javadoc.jar
lib/java/kafka/libs/kafka_2.12-${PKGVERSION}-scaladoc.jar
@@ -78,17 +81,19 @@ lib/java/kafka/libs/kafka_2.12-${PKGVERS
lib/java/kafka/libs/kafka_2.12-${PKGVERSION}.jar
lib/java/kafka/libs/log4j-1.2.17.jar
lib/java/kafka/libs/lz4-java-1.4.jar
-lib/java/kafka/libs/maven-artifact-3.5.0.jar
+lib/java/kafka/libs/maven-artifact-3.5.2.jar
lib/java/kafka/libs/metrics-core-2.2.0.jar
lib/java/kafka/libs/osgi-resource-locator-1.0.1.jar
-lib/java/kafka/libs/plexus-utils-3.0.24.jar
+lib/java/kafka/libs/plexus-utils-3.1.0.jar
lib/java/kafka/libs/reflections-0.9.11.jar
lib/java/kafka/libs/rocksdbjni-5.7.3.jar
lib/java/kafka/libs/scala-library-2.12.4.jar
+lib/java/kafka/libs/scala-logging_2.12-3.7.2.jar
+lib/java/kafka/libs/scala-reflect-2.12.4.jar
lib/java/kafka/libs/slf4j-api-1.7.25.jar
lib/java/kafka/libs/slf4j-log4j12-1.7.25.jar
-lib/java/kafka/libs/snappy-java-1.1.4.jar
-lib/java/kafka/libs/validation-api-1.1.0.Final.jar
+lib/java/kafka/libs/snappy-java-1.1.7.1.jar
+lib/java/kafka/libs/validation-api-${PKGVERSION}.Final.jar
lib/java/kafka/libs/zkclient-0.10.jar
lib/java/kafka/libs/zookeeper-3.4.10.jar
share/examples/kafka/connect-console-sink.properties
Index: pkgsrc/devel/kafka/patches/patch-bin_connect-distributed.sh
diff -u pkgsrc/devel/kafka/patches/patch-bin_connect-distributed.sh:1.1 pkgsrc/devel/kafka/patches/patch-bin_connect-distributed.sh:1.2
--- pkgsrc/devel/kafka/patches/patch-bin_connect-distributed.sh:1.1 Tue Feb 28 08:17:28 2017
+++ pkgsrc/devel/kafka/patches/patch-bin_connect-distributed.sh Thu Apr 5 08:46:37 2018
@@ -1,8 +1,8 @@
-$NetBSD: patch-bin_connect-distributed.sh,v 1.1 2017/02/28 08:17:28 fhajny Exp $
+$NetBSD: patch-bin_connect-distributed.sh,v 1.2 2018/04/05 08:46:37 fhajny Exp $
Paths.
---- bin/connect-distributed.sh.orig 2017-02-14 17:26:08.000000000 +0000
+--- bin/connect-distributed.sh.orig 2018-03-23 22:51:56.000000000 +0000
+++ bin/connect-distributed.sh
@@ -23,7 +23,7 @@ fi
base_dir=$(dirname $0)
@@ -12,4 +12,4 @@ Paths.
+ export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:@PKG_SYSCONFDIR@/connect-log4j.properties"
fi
- EXTRA_ARGS=${EXTRA_ARGS-'-name connectDistributed'}
+ if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
Index: pkgsrc/devel/kafka/patches/patch-bin_connect-standalone.sh
diff -u pkgsrc/devel/kafka/patches/patch-bin_connect-standalone.sh:1.1 pkgsrc/devel/kafka/patches/patch-bin_connect-standalone.sh:1.2
--- pkgsrc/devel/kafka/patches/patch-bin_connect-standalone.sh:1.1 Tue Feb 28 08:17:28 2017
+++ pkgsrc/devel/kafka/patches/patch-bin_connect-standalone.sh Thu Apr 5 08:46:37 2018
@@ -1,8 +1,8 @@
-$NetBSD: patch-bin_connect-standalone.sh,v 1.1 2017/02/28 08:17:28 fhajny Exp $
+$NetBSD: patch-bin_connect-standalone.sh,v 1.2 2018/04/05 08:46:37 fhajny Exp $
Paths.
---- bin/connect-standalone.sh.orig 2017-02-14 17:26:08.000000000 +0000
+--- bin/connect-standalone.sh.orig 2018-03-23 22:51:56.000000000 +0000
+++ bin/connect-standalone.sh
@@ -23,7 +23,7 @@ fi
base_dir=$(dirname $0)
@@ -12,4 +12,4 @@ Paths.
+ export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:@PKG_SYSCONFDIR@/connect-log4j.properties"
fi
- EXTRA_ARGS=${EXTRA_ARGS-'-name connectStandalone'}
+ if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
Index: pkgsrc/devel/kafka/patches/patch-bin_kafka-server-stop.sh
diff -u pkgsrc/devel/kafka/patches/patch-bin_kafka-server-stop.sh:1.1 pkgsrc/devel/kafka/patches/patch-bin_kafka-server-stop.sh:1.2
--- pkgsrc/devel/kafka/patches/patch-bin_kafka-server-stop.sh:1.1 Tue Feb 28 08:17:28 2017
+++ pkgsrc/devel/kafka/patches/patch-bin_kafka-server-stop.sh Thu Apr 5 08:46:37 2018
@@ -1,13 +1,13 @@
-$NetBSD: patch-bin_kafka-server-stop.sh,v 1.1 2017/02/28 08:17:28 fhajny Exp $
+$NetBSD: patch-bin_kafka-server-stop.sh,v 1.2 2018/04/05 08:46:37 fhajny Exp $
More columns to make grep match.
---- bin/kafka-server-stop.sh.orig 2017-02-14 17:26:07.000000000 +0000
+--- bin/kafka-server-stop.sh.orig 2018-03-23 22:51:56.000000000 +0000
+++ bin/kafka-server-stop.sh
-@@ -13,7 +13,7 @@
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+ SIGNAL=${SIGNAL:-TERM}
-PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
+PIDS=$(ps axwww | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
Index: pkgsrc/devel/kafka/patches/patch-config_server.properties
diff -u pkgsrc/devel/kafka/patches/patch-config_server.properties:1.1 pkgsrc/devel/kafka/patches/patch-config_server.properties:1.2
--- pkgsrc/devel/kafka/patches/patch-config_server.properties:1.1 Tue Feb 28 08:17:28 2017
+++ pkgsrc/devel/kafka/patches/patch-config_server.properties Thu Apr 5 08:46:37 2018
@@ -1,13 +1,13 @@
-$NetBSD: patch-config_server.properties,v 1.1 2017/02/28 08:17:28 fhajny Exp $
+$NetBSD: patch-config_server.properties,v 1.2 2018/04/05 08:46:37 fhajny Exp $
Paths.
---- config/server.properties.orig 2015-02-26 22:12:06.000000000 +0000
+--- config/server.properties.orig 2018-03-23 22:51:56.000000000 +0000
+++ config/server.properties
-@@ -55,7 +55,7 @@ socket.request.max.bytes=104857600
+@@ -57,7 +57,7 @@ socket.request.max.bytes=104857600
############################# Log Basics #############################
- # A comma seperated list of directories under which to store log files
+ # A comma separated list of directories under which to store log files
-log.dirs=/tmp/kafka-logs
+log.dirs=@KAFKA_DATADIR@
Index: pkgsrc/devel/kafka/patches/patch-bin_kafka-run-class.sh
diff -u pkgsrc/devel/kafka/patches/patch-bin_kafka-run-class.sh:1.2 pkgsrc/devel/kafka/patches/patch-bin_kafka-run-class.sh:1.3
--- pkgsrc/devel/kafka/patches/patch-bin_kafka-run-class.sh:1.2 Tue Jul 4 14:14:46 2017
+++ pkgsrc/devel/kafka/patches/patch-bin_kafka-run-class.sh Thu Apr 5 08:46:37 2018
@@ -1,8 +1,8 @@
-$NetBSD: patch-bin_kafka-run-class.sh,v 1.2 2017/07/04 14:14:46 fhajny Exp $
+$NetBSD: patch-bin_kafka-run-class.sh,v 1.3 2018/04/05 08:46:37 fhajny Exp $
Paths.
---- bin/kafka-run-class.sh.orig 2017-06-22 22:06:18.000000000 +0000
+--- bin/kafka-run-class.sh.orig 2018-03-23 22:51:56.000000000 +0000
+++ bin/kafka-run-class.sh
@@ -20,6 +20,10 @@ then
exit 1
@@ -15,7 +15,7 @@ Paths.
# CYGINW == 1 if Cygwin is detected, else 0.
if [[ $(uname -a) =~ "CYGWIN" ]]; then
CYGWIN=1
-@@ -55,84 +59,7 @@ if [ -z "$SCALA_BINARY_VERSION" ]; then
+@@ -55,80 +59,7 @@ if [ -z "$SCALA_BINARY_VERSION" ]; then
SCALA_BINARY_VERSION=$(echo $SCALA_VERSION | cut -f 1-2 -d '.')
fi
@@ -23,11 +23,7 @@ Paths.
-shopt -s nullglob
-for dir in "$base_dir"/core/build/dependant-libs-${SCALA_VERSION}*;
-do
-- if [ -z "$CLASSPATH" ] ; then
-- CLASSPATH="$dir/*"
-- else
-- CLASSPATH="$CLASSPATH:$dir/*"
-- fi
+- CLASSPATH="$CLASSPATH:$dir/*"
-done
-
-for file in "$base_dir"/examples/build/libs/kafka-examples*.jar;
@@ -101,7 +97,7 @@ Paths.
do
if should_include_file "$file"; then
CLASSPATH="$CLASSPATH":"$file"
-@@ -152,13 +79,13 @@ fi
+@@ -153,13 +84,13 @@ fi
# Log directory to use
if [ "x$LOG_DIR" = "x" ]; then