Same messages in a topic are getting read by consumer group repeatedly - java

I have created a topic with 10 partitions and a replication factor of 2, and a consumer group named 'xyz' with concurrency set to 10, so that 10 consumers consume the 10 partitions in parallel, i.e. each partition is consumed by exactly one consumer. Somehow, though, the same messages are getting read by the consumer group again and again. My code is below; please let me know what I am doing wrong.
KafkaTopicConfig:
@Configuration
public class KafkaTopicConfig {

    @Value(value = "${kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("someTopicName", 10, (short) 2);
    }
}
KafkaProducerConfig:
@Configuration
public class KafkaProducerConfig {

    @Value(value = "${kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, Event> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, Event> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
KafkaConsumerConfig:
@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Value(value = "${kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ConsumerFactory<String, Event> consumerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        config.put(ConsumerConfig.GROUP_ID_CONFIG, "someGroupId");
        config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        config.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(),
                new JsonDeserializer<>(Event.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Event> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Event> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(10);
        factory.getContainerProperties().setPollTimeout(500);
        return factory;
    }
}
Listener:
@KafkaListener(topics = "someTopicName", groupId = "someGroupId")
public void consume(Event event) {
    // business logic
}
Please let me know if any other information is needed, and what I am doing wrong.
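One diagnostic that may help narrow this down (a sketch, not a fix): log the partition and offset of every record the listener sees. If the same partition/offset pairs reappear, the repeats are re-deliveries rather than duplicate publishes, which typically happens after a rebalance, for example when processing a poll's worth of records takes longer than max.poll.interval.ms. The header constants are spring-kafka's KafkaHeaders; the log field is assumed to exist.

@KafkaListener(topics = "someTopicName", groupId = "someGroupId")
public void consume(Event event,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
        @Header(KafkaHeaders.OFFSET) long offset) {
    // if the same partition/offset pair shows up twice, the record was re-delivered
    log.info("partition={} offset={} event={}", partition, offset, event);
    // business logic
}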

Related

Spring Kafka Listener, put message back on queue if throws exception

I have a Spring project using Kafka as well.
My issue is that whenever I get a message from Kafka, I handle it, acknowledge it, and go on to the next offset.
But if my handler catches an exception, the acknowledgement doesn't work.
I want to be able to acknowledge the message, put it back at the end of the queue, and move on to the next message.
Is there a good way to do this?
This is my kafka config class
@Configuration
@EnableKafka
public class KafkaConfig {

    public static final String TOPIC = "FirstPrioQueue";
    public static final String TOPIC_RETRY_FAILED = "FailedQueue";
    private static final String GROUP_ID = "GroupId";

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(final ConsumerFactory<String, String> consumerFactory,
            final RetryTemplate retryTemplate) {
        final ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(1);
        factory.setRetryTemplate(retryTemplate);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        return factory;
    }

    @Bean
    public RetryTemplate retryTemplate() {
        final FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
        backOffPolicy.setBackOffPeriod(TimeUnit.SECONDS.toMillis(10));
        final RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setBackOffPolicy(backOffPolicy);
        retryTemplate.setRetryPolicy(new AlwaysRetryPolicy());
        return retryTemplate;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory(final ServiceLocator serviceLocator) {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(serviceLocator));
    }

    @Bean
    public Map<String, Object> consumerConfigs(final ServiceLocator serviceLocator) {
        final String brokers = ServiceConverter.of(serviceLocator)
                .commaSeparatedAddresses(Services.KAFKA).get();
        final Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP_ID);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        return props;
    }

    @Bean
    public KafkaProducer kafkaProducer(final ServiceLocator locator) {
        final KafkaProducerConfig config = KafkaProducerConfig.builder()
                .brokerAddresses(ServiceConverter.of(locator).commaSeparatedAddresses(Services.KAFKA).get())
                .sendDelay(3)
                .build();
        return new KafkaProducer(config);
    }
}
And these are my listeners:
@KafkaListener(topicPattern = KafkaConfig.TOPIC)
public void listen(final ConsumerRecord<String, String> record, final Acknowledgment acknowledgment) {
    handleRecord(record, acknowledgment);
}

@KafkaListener(topicPattern = KafkaConfig.TOPIC_RETRY_FAILED)
public void listenOnFailedClicksQueue(final ConsumerRecord<String, String> record, final Acknowledgment acknowledgment) {
    handleRecord(record, acknowledgment);
}

private void handleRecord(final ConsumerRecord<String, String> record, final Acknowledgment acknowledgment) {
    try {
        doSomethingThatMayCauseException();
        acknowledgment.acknowledge();
    }
    catch (final RuntimeException e) {
        // 'event' is assumed to be record.value() deserialized to a
        // VersionedEventClickout earlier in the handler (omitted in this snippet)
        tryToPutFailedClickInQueue(event);
        acknowledgment.acknowledge();
        throw e;
    }
}
private void tryToPutFailedClickInQueue(final VersionedEventClickout event) {
    final Optional<KafkaProducer.Message> message = getMessage(event);
    if (message.isPresent()) {
        LOGGER.info("Putting clickout in failed clicks kafka queue");
        producer.write(message.get());
    }
    else {
        LOGGER.warn("Failed to put click into the failed clicks kafka queue!");
    }
}

private Optional<KafkaProducer.Message> getMessage(final VersionedEventClickout versionedEventClickout) {
    if (versionedEventClickout == null) {
        return Optional.empty();
    }
    return Optional.of(KafkaProducer.Message.builder()
            .topic(KafkaConfig.TOPIC_RETRY_FAILED)
            .payload(JsonMapper.toJson(versionedEventClickout))
            .key(String.valueOf(versionedEventClickout.getData().getMerchantId()) + '_'
                    + Objects.hashCode(versionedEventClickout.getData().getMerchantProductSku()))
            .build());
}

Getting "Magic v1 does not support record headers" when producing message

I am getting "Magic v1 does not support record headers" while producing a message. My code is below.
KafkaProducerConfig:
@Configuration
public class KafkaProducerConfig {

    @Value(value = "${kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, Event> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        config.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, Event> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
KafkaConsumerConfig:
@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Value(value = "${spring.kafka.consumer.group-id}")
    private String groupId;

    @Value(value = "${kafka.consumer.enable.auto.commit}")
    private String autoCommit;

    @Value(value = "${kafka.consumer.auto.commit.interval.ms}")
    private String autoCommitInterval;

    @Value(value = "${kafka.consumer.auto.offset.reset}")
    private String autoOffsetReset;

    @Value(value = "${kafka.consumer.session.timeout.ms}")
    private String sessionTimeout;

    @Value(value = "${kafka.consumer.concurrency}")
    private String concurrency;

    @Value(value = "${kafka.consumer.pollTimeout}")
    private String pollTimeout;

    @Bean
    public ConsumerFactory<String, Event> consumerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        config.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, autoCommit);
        config.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitInterval);
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(),
                new JsonDeserializer<>(Event.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Event> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Event> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(Integer.parseInt(concurrency));
        factory.getContainerProperties().setPollTimeout(Integer.parseInt(pollTimeout));
        return factory;
    }
}
KafkaTopicConfig:
@Configuration
public class KafkaTopicConfig {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Value(value = "${kafka.topicName}")
    private String topicName;

    @Value(value = "${kafka.topic.partitions}")
    private String partitions;

    @Value(value = "${kafka.topic.replicationFactor}")
    private String replicationFactor;

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic clientTopic() {
        return new NewTopic(topicName, Integer.parseInt(partitions), Short.parseShort(replicationFactor));
    }
}
Producing the message (this is where the error occurs):
kafkaTemplate.send(topicName, event);
Consuming the message:
@KafkaListener(topics = "someTopicName", groupId = "somegroupId")
public void consume(Event event) {
    // business logic
}
Gradle dependencies I am using:
implementation('org.springframework.kafka:spring-kafka')
implementation('com.fasterxml.jackson.core:jackson-databind:2.9.4')
Spring Boot version I am using:
springBootVersion = '2.0.3.RELEASE'
Please let me know what I am doing wrong. I have already tried adding the following to the producer factory, but it didn't help:
config.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
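Two things worth checking here, stated with appropriate hedging. First, this error usually means the broker (or the topic's message.format.version) predates Kafka 0.11, the first version that supports record headers, so upgrading the broker is the cleanest fix. Second, to be certain the "no type headers" flag is actually applied, the JsonSerializer instance can be configured directly instead of relying on the class-based config. A sketch, using spring-kafka 2.1+ APIs:

// Inside producerFactory(): hand Kafka a pre-configured serializer instance
// so ADD_TYPE_INFO_HEADERS=false is definitely in effect.
// requires java.util.Collections
JsonSerializer<Event> valueSerializer = new JsonSerializer<>();
valueSerializer.configure(
        Collections.singletonMap(JsonSerializer.ADD_TYPE_INFO_HEADERS, false), /* isKey */ false);
return new DefaultKafkaProducerFactory<>(config, new StringSerializer(), valueSerializer);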

How to read multiple types of JSON from one topic in Kafka Spring Boot

I have one topic from which I can receive different types of JSON. However, I'm getting an exception when the consumer tries to read a message. I tried adding additional bean names, but that didn't work. It seems that every listener reads from the topic and tries to convert the message to its own type. Is there a way to specify that only a particular factory should be used for a particular input type? Is there any other way to fix the issue?
ERROR
Caused by:
org.springframework.messaging.converter.MessageConversionException:
Cannot convert from
[com.lte.assessment.assessments.AssessmentAttemptRequest] to
[com.lte.assessmentanalytics.data.SiteLevelAnalyticsRequest] for
GenericMessage
[payload=com.lte.assessment.assessments.AssessmentAttemptRequest@68eb637f,
headers={kafka_offset=22,
kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@252d8ffb,
kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null,
kafka_receivedPartitionId=0, kafka_receivedTopic=ltetopic,
kafka_receivedTimestamp=1546117529267}]
Config
@EnableKafka
@Configuration
public class KafkaConfig {

    static Map<String, Object> config = new HashMap<>();

    static {
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        config.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    }

    @Bean
    public ConsumerFactory<String, AssessmentAttemptRequest> assessmentAttemptDetailsEntityConsumerFactory() {
        JsonDeserializer<AssessmentAttemptRequest> deserializer = new JsonDeserializer<>();
        deserializer.addTrustedPackages("com.lte.assessment.assessments");
        return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), deserializer);
    }

    @Bean(name = "aaKafkaListenerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, AssessmentAttemptRequest> aaKafkaListenerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, AssessmentAttemptRequest> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(assessmentAttemptDetailsEntityConsumerFactory());
        return factory;
    }

    @Bean
    public ConsumerFactory<String, AssessmentQuestionAnalyticsEntity> assessmentQuestionAnalyticssEntityConsumerFactory() {
        JsonDeserializer<AssessmentQuestionAnalyticsEntity> deserializer = new JsonDeserializer<>();
        deserializer.addTrustedPackages("com.lte.assessment.assessments");
        return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), deserializer);
    }

    @Bean(name = "aqKafkaListenerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, AssessmentQuestionAnalyticsEntity> aqKafkaListenerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, AssessmentQuestionAnalyticsEntity> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(assessmentQuestionAnalyticssEntityConsumerFactory());
        return factory;
    }

    @Bean
    public ConsumerFactory<String, SiteLevelAnalyticsEntity> siteLevelAnalyticsEntityConsumerFactory() {
        JsonDeserializer<SiteLevelAnalyticsEntity> deserializer = new JsonDeserializer<>();
        deserializer.addTrustedPackages("com.lte.assessment.assessments");
        return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), deserializer);
    }

    @Bean("slaKafkaListenerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, SiteLevelAnalyticsEntity> slaKafkaListenerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, SiteLevelAnalyticsEntity> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(siteLevelAnalyticsEntityConsumerFactory());
        return factory;
    }
}
Service
@Service
public class TopicObserver implements ConsumerSeekAware.ConsumerSeekCallback, ConsumerSeekAware {

    @Autowired
    private AssessmentAttemptService assessmentAttemptService;

    @Autowired
    private AssessmentQuestionService assessmentQuestionService;

    @Autowired
    private SiteLevelAnalyticsService siteLevelAnalyticsService;

    private final ThreadLocal<ConsumerSeekCallback> seekCallBack = new ThreadLocal<>();

    @KafkaListener(topics = "ltetopic", groupId = "group_id", containerFactory = "aaKafkaListenerFactory")
    public void consumeAttemptDetails(AssessmentAttemptRequest request) {
        assessmentAttemptService.storeAttempDetails(request);
    }

    @KafkaListener(topics = "ltetopic", groupId = "group_id", containerFactory = "aqKafkaListenerFactory")
    public void setAssessmentQeustionAnalytics(AssessmentQuestionRequest request) {
        assessmentQuestionService.storeQuestionDetails(request);
    }

    @KafkaListener(topics = "ltetopic", groupId = "group_id", containerFactory = "slaKafkaListenerFactory")
    public void siteLevelAnalytics(SiteLevelAnalyticsRequest request) {
        siteLevelAnalyticsService.storeSiteLevelDetailsDetails(request);
    }
}
@Bean
public ConsumerFactory<String, SiteLevelAnalyticsEntity> siteLevelAnalyticsEntityConsumerFactory() {
    JsonDeserializer<SiteLevelAnalyticsEntity> deserializer = new JsonDeserializer<>();
    deserializer.addTrustedPackages("com.lte.assessment.assessments");
    return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), deserializer);
}
In this consumer factory you deserialize to SiteLevelAnalyticsEntity, but the JsonDeserializer only trusts the com.lte.assessment.assessments package. Make sure the trusted packages include the package that actually contains the target type, e.g.:
deserializer.addTrustedPackages("com.lte.assessmentanalytics.data");
@Deadpool is right. If you need a simpler solution, consume your messages as a String JSON payload and deserialize them to your objects manually.
@Bean
public ConsumerFactory<String, String> createConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    // kafkaEmbedded() is an embedded-broker bean from this (test) context;
    // use your real bootstrap servers outside of tests
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(createConsumerFactory());
    return factory;
}
In your listener, consume as String.
@KafkaListener(id = "foo", topics = YOUR_TOPIC)
public void listen(String json) {
    // convert to your object here
}
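A minimal sketch of what that conversion could look like, assuming each payload carries a field that identifies its type (the attemptId/siteId discriminators are made up for illustration, and the service fields are borrowed from the question's TopicObserver):

// requires com.fasterxml.jackson.databind.ObjectMapper and JsonNode
private final ObjectMapper mapper = new ObjectMapper();

@KafkaListener(id = "foo", topics = YOUR_TOPIC)
public void listen(String json) throws IOException {
    JsonNode root = mapper.readTree(json);
    if (root.has("attemptId")) {            // hypothetical discriminator field
        AssessmentAttemptRequest request = mapper.treeToValue(root, AssessmentAttemptRequest.class);
        assessmentAttemptService.storeAttempDetails(request);
    } else if (root.has("siteId")) {        // hypothetical discriminator field
        SiteLevelAnalyticsRequest request = mapper.treeToValue(root, SiteLevelAnalyticsRequest.class);
        siteLevelAnalyticsService.storeSiteLevelDetailsDetails(request);
    }
}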

Reading a String from Redis using Spring Boot

I have set a key in Redis using redis-cli as below:
redis 127.0.0.1:6379> set 100.vo.t1 '{"foo": "bar", "ans": 42}'
OK
redis 127.0.0.1:6379> get 100.vo.t1
"{\"foo\": \"bar\", \"ans\": 42}"
But now I am trying to read the same key using Spring Boot and Jedis, and the value is coming back as null.
Repository
@Repository
public class TemplateRepositoryImpl implements TemplateRepository {

    private ValueOperations<String, Object> valueOperations;
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    public TemplateRepositoryImpl(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @PostConstruct
    private void init() {
        valueOperations = redisTemplate.opsForValue();
    }

    @Override
    public String getTemplateSequenceinString(String key) {
        System.out.println("the key received is " + key);
        return (String) valueOperations.get(key);
    }
}
Controller
@Controller
@RequestMapping("/ts")
public class MainController {

    @Autowired
    private TemplateRepository tmpl;

    @GetMapping("/initiate/{templateName}")
    public String getTemplate(Model model, @PathVariable("templateName") String templateName) throws IOException {
        String key = "100.vo.t1";
        System.out.println("The answer is " + tmpl.getTemplateSequenceinString(key));
        return templateName;
    }
}
RedisConfig
@Configuration
@ComponentScan("com.ts.templateService")
public class RedisConfig_1 {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory jedisConFactory = new JedisConnectionFactory();
        jedisConFactory.setHostName("localhost");
        jedisConFactory.setPort(6379);
        return jedisConFactory;
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}
The key here is the serializer: RedisTemplate's default is JdkSerializationRedisSerializer, but your value was written as a plain string, so you should use StringRedisSerializer.
@Bean
public RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setDefaultSerializer(new StringRedisSerializer()); // set here
    template.setConnectionFactory(jedisConnectionFactory());
    return template;
}
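If only string keys and values are involved, StringRedisTemplate is an equivalent shortcut, since it pre-configures string serializers for keys, values, and hashes. A sketch of the same config using it:

@Bean
public StringRedisTemplate stringRedisTemplate() {
    // StringRedisTemplate defaults to StringRedisSerializer everywhere
    return new StringRedisTemplate(jedisConnectionFactory());
}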

Spring Kafka Custom Deserializer

I am following the steps listed in this link to create a custom deserializer. The message that I receive from Kafka has the plain text "log message -" before the JSON string. I want the deserializer to ignore this prefix and parse the JSON data. Is there a way to do it?
Application
@SpringBootApplication
public class TransactionauditServiceApplication {

    public static void main(String[] args) throws InterruptedException {
        new SpringApplicationBuilder(TransactionauditServiceApplication.class).web(false).run(args);
    }

    @Bean
    public MessageListener messageListener() {
        return new MessageListener();
    }

    public static class MessageListener {

        @KafkaListener(topics = "ctp_verbose", containerFactory = "kafkaListenerContainerFactory")
        public void listen(@Payload ConciseMessage message,
                @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
            System.out.println("Received message in group foo: " + message.getStringValue("traceId") + " partition " + partition);
        }
    }
}
ConsumerConfig
@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Value(value = "${kafka.bootstrapAddress:localhost:9092}")
    private String bootstrapAddress;

    @Value(value = "${groupId:audit}")
    private String groupId;

    @Bean
    public ConsumerFactory<String, ConciseMessage> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(ConciseMessage.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, ConciseMessage> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, ConciseMessage> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
By writing new JsonDeserializer<>(ConciseMessage.class), you are just telling Kafka that you want to convert the message to the ConciseMessage type; that doesn't make it a custom deserializer. To fix the problem, you will most likely have to write your own Deserializer implementation with the logic to strip the "log message -" text before parsing the JSON.
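A minimal sketch of such a deserializer, assuming the prefix is the literal "log message -" and that the remaining text binds cleanly to ConciseMessage with Jackson:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;

import com.fasterxml.jackson.databind.ObjectMapper;

public class PrefixStrippingDeserializer implements Deserializer<ConciseMessage> {

    private static final String PREFIX = "log message -";
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure
    }

    @Override
    public ConciseMessage deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;
        }
        String text = new String(data, StandardCharsets.UTF_8).trim();
        if (text.startsWith(PREFIX)) {
            // drop the plain-text prefix so only the JSON remains
            text = text.substring(PREFIX.length()).trim();
        }
        try {
            return mapper.readValue(text, ConciseMessage.class);
        } catch (IOException e) {
            throw new SerializationException("Failed to deserialize payload", e);
        }
    }

    @Override
    public void close() {
        // nothing to close
    }
}

Wire it in place of the JsonDeserializer in the consumer factory, e.g. new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new PrefixStrippingDeserializer()).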
